Expanding a bit on what others have said, for anybody who wants to dig a little deeper (simplified; this whole discussion could be pages and pages of explanation)
The code we write (source code) and the code that makes the machine do its thing (executable code) are usually very different, and there are other programs (some are compilers, others are interpreters, and I'm sure there are others still) that help translate between them. Hopefully my examples and walkthrough below help illustrate what others have meant by their answers and give some context on how we got to where we are, historically
At the bare metal, with electricity flowing through a processor, you're generally dealing with just sequences of 0s and 1s - usually called machine code. This sequence of "on" and "off" is the only thing the hardware understands; it's really hard for humans to read, but it's all we had at first. A program saved as machine code is typically called a binary (for formatting's sake, I added spaces between the bytes/groups of 8 bits/binary digits)
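Something along these lines (purely illustrative - the exact bit patterns depend entirely on the processor; these happen to be roughly what the little assembly program further down would encode to on a Z80-style chip):
00111110 00000001 00000110 00000001 10000000 00100110 00000000 00101110 00000000 01110111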
A while later, we started to develop a way of writing things as small keywords with numerical values, and wrote a program that (simplified) would replace the keywords with specific sequences of 0s and 1s. This is assembly code, and the program that does the replacements is called an assembler. Assemblers are pretty straightforward and assembly is close to a 1:1 translation of machine code, meaning you can convert back and forth between the two. The binary above, written out as assembly, looks something like this:
LD A, 0x01   ; put the value 1 into register A
LD B, 0x01   ; put the value 1 into register B
ADD A,B      ; add B to A - the result (2) stays in A
LD H, 0x00   ; set the register pair HL to memory address 0x0000...
LD L, 0x00   ; ...one half at a time
LD (HL), A   ; store the result at that memory address
These forms of code are usually your executable code. All the instructions to get the hardware to do its thing are there, but it takes expertise to pull the higher level meaning back out
This kind of writing still gets tedious, and there are a lot of common things you'd do in assembly that you might want shortcuts for. Some organizational features got added to assembly, like being able to comment code, add labels, etc., but the next big step forward was to create higher level languages that look more like how we write math concepts. These languages typically get compiled, by a program called a compiler, into machine code before the code can run. Compilers working with high level languages can detect a lot of things and do a lot of tricks to give you efficient machine code; it's not such a 1:1 translation anymore
This kind of representation is what's generally meant by "source code", and it has a lot of semantic information that helps with understandability. Here's the same 1+1 program again, this time in C:
int main() {
    int result = 1 + 1;  // a meaningful name and familiar math notation make the intent obvious
    return 0;
}
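As a small example of those compiler tricks: the compiler can see that 1 + 1 never changes, so it will typically just compute the 2 itself while compiling and bake the constant into the machine code, rather than emitting an ADD instruction to run every time. Sketched in the same assembly style as above (illustrative only, not the real output of any particular compiler):
LD A, 0x02   ; the compiler already did the math, so there's no ADD at run time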
There are some very popular high level languages now that don't get compiled into machine code at all. Instead, a program called an interpreter reads the high level language and executes it piece by piece. These languages don't have a compilation step and usually don't result in a machine code file to run; you just run the interpreter and point it at the source directly. Proprietary code that's shipped this way usually goes through a process called obfuscation/minification. This takes something that looks like:
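(a made-up, JavaScript/TypeScript-flavored sketch, since the web is where you run into minification most often - the names and the exact minified output are just illustrative)
function addNumbers(firstNumber, secondNumber) {
    // descriptive names and a comment spell out exactly what's going on
    const result = firstNumber + secondNumber;
    return result;
}
and turns it into something like:
function a(n,t){return n+t}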
It condenses things immensely, which helps with performance/load times, and it also makes it much less clear what the code is doing. Just like with assembly, all the info necessary to make it work is there, but the semantics are missing
So, to conclude - yes, you can inspect the raw instructions for any program and see what it's doing, but you're very likely going to be met with machine code (that you can turn into assembly) or minified scripts instead of the kind of source code that was used to create that program