I often find mathematical concepts much easier to understand if they're presented as Python code rather than math notation. Someone should write a book like that.
Algebraic notation breaks just about every rule programmers are taught about keeping their code human readable. For example:
Variable and function names should be descriptive
Don't cram everything into one line
Break up large statements
Consistency is key
Don't be fancy for fancy's sake, don't over-optimize (this is for learning, remember?)
Add in-line comments for lines that aren't easily grasped
Be explicit where possible (it's a convention to omit the multiplication operator when multiplying variables because variables are only one letter anyway...)
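To make the contrast concrete, here's a rough sketch (my own toy example, nothing canonical) of the quadratic formula written "blackboard style" versus written the way the rules above would demand:

import math

# "blackboard style": one line, one-letter names, no commentary
def r(a, b, c): return ((-b + math.sqrt(b*b - 4*a*c)) / (2*a), (-b - math.sqrt(b*b - 4*a*c)) / (2*a))

# "code review style": descriptive names, small steps, comments
def quadratic_roots(a, b, c):
    # the discriminant decides how many real roots there are
    discriminant = b**2 - 4*a*c
    root_of_discriminant = math.sqrt(discriminant)
    first_root = (-b + root_of_discriminant) / (2*a)
    second_root = (-b - root_of_discriminant) / (2*a)
    return first_root, second_root

Both compute the same thing; only one of them is something you'd want to read.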
And then we force kids to cram the whole stdlib (or rather its local bastardization) into their heads, or at best give them intentionally bad (uncommented) documentation during exams, and then wonder why so many just don't seem to get it, or even resent it.
I feel like this isn't quite fair to math. Most of these can apply to school math (when it's taught in a very bad way), but not even always there, imo.
It's true that math notation generally doesn't give things very descriptive names, but most of the time, depending on where and what you are learning, the symbols for variables/functions do hint at what the object is supposed to be.
E.g.: when working in linear algebra, capital letters (especially A, B, C, D, as well as M) are generally matrices, v, w, u are usually vectors, and V, W are vector spaces. Along with conventions that are largely independent of the specific math you are doing, like n, m, k usually being integers, i or j being indices, f and g being functions, and x, y, z being unknowns.
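In code, the closest analogue I can think of is type hints quietly doing the job those naming conventions do in math. A small sketch (the function and names are made up for illustration, using numpy):

import numpy as np

# The convention "A is a matrix, v is a vector, n is an integer"
# plays the same role here as the type annotations do.
def apply_n_times(A: np.ndarray, v: np.ndarray, n: int) -> np.ndarray:
    for _ in range(n):
        v = A @ v  # repeatedly apply the matrix A to the vector v
    return v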
Also, math statements should be given comments too. But usually that function is served by the text around the equations or the commentary given alongside them, so it's not a direct part of the symbolic writing itself (unlike comments, which are a direct part of source code). And when a long symbolic expression isn't broken up or given much commentary, that is usually an implicit sign that it should be easy/quick for the reader to understand/derive based on previously learned material.
Finally, there's also the problem of having to manipulate the symbols. In code you just write it and then the computer has to deal with it (and it doesn't care how verbose you made a variable name). But in math you are generally expected to work with your symbolic expressions and manipulate them, and it's very cumbersome to keep rewriting multi-letter names every time you manipulate an expression. Additionally, math is still generally worked out on paper first and transferred into a digital/printed format second, so you can't just copy + paste or rely on autocompletion to move long variable names around, like you might when coding.
You can't expect to learn the science of abstraction by making it concrete. Examples are no more than examples, and if one field requires abstract theory, it is indeed mathematics.
Well, a lot of these points are really more about readability than they are about reducing the abstraction. Smaller, labeled chunks of information are easier to process than larger ones with no anatomy.
But even so, abstractions, especially in programming, are often made because a pattern was noticed between concrete examples. Teaching the abstraction first or even alone does inherently skip a lot of context for why it was made in the first place. Sometimes, you need to know what problem a function is solving before you can truly know the function.
Yep, my issue is with the presentation, not the actual content. I've also experienced my share of elitism from people who seem to think that you either get it or you're too stupid/lazy, and that there couldn't possibly also be an issue with the teaching methods and notation.
Functional programming is much more math-oriented and I think works well here, as it likes to violate a lot of these rules as a rule. I think that's what makes it so challenging for some folks and so obvious for others.
That's an interesting notion.
For you, is it when it's presented like total = sum([1, 2, 3]), or when it digs in and explains how the sum function is implemented?
I think there's definitely something there in either case, but teaching math through "how you would implement it in code" seems really interesting. You could start really basic, and then as you get to more complicated math, you keep using the tools you built before. When you get to those "big idea" moments, you could go back to your old functions and modify them to work in the new use case while still supporting the old. Like showing how multiplication() needs to change to support complex numbers without making anything else different.
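A rough sketch of what that multiplication() idea might look like (representing complex numbers as (real, imaginary) pairs; the names are hypothetical, not from any particular curriculum):

def multiply(x, y):
    # the original version: plain real numbers
    return x * y

def multiply_complex(x, y):
    # the extended version: x and y are (real, imaginary) pairs,
    # and (a + b*i) * (c + d*i) = (a*c - b*d) + (a*d + b*c)*i
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

print(multiply(3, 4))                    # 12
print(multiply_complex((3, 0), (4, 0)))  # (12, 0), same answer as before
print(multiply_complex((0, 1), (0, 1)))  # (-1, 0), i.e. i*i = -1

Everything the old version could do still works; writing a real number x as (x, 0) gives back the old behaviour.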
but teaching math through “how you would implement it in code” seems really interesting.
This is near exactly how I handled learning advanced mathematics back in the late 80s and early 90s. This method takes the abstract and makes it practical, which is what many people really need in order to effectively learn.
I know this is just a simple example but sum() doesn't teach you about the concept of sums. It would have to be something like:
def sum_up(my_list):
    result = 0  # running total, starts at zero
    for item in my_list:
        result = result + item  # add each item to the total
    return result
Then you could run that through a debugger and see how the variables change at every step. That way you can develop an understanding of what's going on there.
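Or, if a debugger feels like overkill, a single print inside the loop gives a similar trace (same function as above, just instrumented):

def sum_up(my_list):
    result = 0
    for item in my_list:
        result = result + item
        print(f"after adding {item}: result = {result}")  # watch the running total grow
    return result

print(sum_up([1, 2, 3]))
# after adding 1: result = 1
# after adding 2: result = 3
# after adding 3: result = 6
# 6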
This was so much me with the concept of the generalized Cartesian product. The whole class was very confused by that topic, until a bright classmate pointed out a relationship between that concept and Python lists, and it started to make so much sense.
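For anyone who hasn't seen the connection: the generalized Cartesian product is just "pick one element from each set", which is exactly what nested loops or itertools.product over Python lists give you. A quick sketch (my own example, not the one from that class):

from itertools import product

colors = ["red", "green"]
sizes = ["S", "M", "L"]

# every way of picking one color and one size: the Cartesian product
for pair in product(colors, sizes):
    print(pair)
# ('red', 'S'), ('red', 'M'), ('red', 'L'), ('green', 'S'), ...

# the same thing spelled out as a nested list comprehension
pairs = [(c, s) for c in colors for s in sizes]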