In Which We Learn to Code
Ever have those days where you are absolutely certain you have somehow messed up the process despite getting something that produces the answer the book asks for?
I am learning about lists and loops in my Python book. Logically this specific exercise is clearly an extension of the whole "x = x + 1" conceptual problem which I had to get Mathfriend to explain to me in very small words but have a good handle on now.
You are given a list: xs = [12, 10, 32, 3, 66, 17, 42, 99, 20]
The assignment is to find the product of the list using a loop.
This works:
total = int(1)
for xs in [12, 10, 32, 3, 66, 17, 42, 99, 20]:
    total = int(total * xs)
print(total)
It produces the desired result. If you omit setting total to 1 at the beginning it complains about total being undefined farther down, which I get. It depends on itself; it needs to start at something. And setting it to start at 1 doesn't mess with the end result. (The previous exercise was addition and it started at zero.)
I cannot shake the feeling I am getting some part of this wrong in some way, possibly in this being the wrong approach to it, but I can't figure out another possible one with the terms the book has described so far. Especially when the addition exercise did explicitly say "set it to zero to start." I just feel like a more elegant way to do it should exist.
(Also welcome to the posts where I complain about my coding lessons. Particularly in self-teaching I find it easier to actually sit down to do things if I'm writing up a Dreamwidth post about them, so you'll be getting some chronicling of my Adventures in Code coming up.)
no subject
My inference is that it's making you a little crazy that you need to multiply the value that you actually want by 1 in order to get the answer. You want the product, why can't you just multiply the values together like a normal person?
In point of fact, you can, but the way to do it with a loop is kind of clunky. The more elegant way to do it is with recursion, which requires the ability to create functions, something that I'm guessing your Python tutorial hasn't explained yet (when I learned programming at Wellesley, they taught recursion first, and loops as an extension of recursion, but this is not commonly the way that it's done).
Here's a way to avoid the total=1 step while still using loops:
myList = [12, 10, 32, 3, 66, 17, 42, 99, 20]
total = myList[0]
for xs in myList[1:]:
    total = int(total * xs)
To instead do it with recursion, you create a function that either a) if the list only contains one value returns that value, or b) if the list contains more than one value returns the first value in the list multiplied by the same function applied to the rest of the list.
Here's python code that does the same operations as the above, but recursively:
def rec_product(l):
    if len(l) == 0:
        print("You cannot calculate the product of an empty list")
    elif len(l) == 1:
        return l[0]
    else:
        return l[0] * rec_product(l[1:])

rec_product([12, 10, 32, 3, 66, 17, 42, 99, 20])
no subject
no subject
I think this is a good illustration of why the standard approach is to start with the multiplicative identity (one) as a running product, and fold in each list element -- the code ends up easier to read, as long as you're familiar with one being the multiplicative identity. (And zero being the additive identity.)
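For what it's worth, here's a minimal sketch of that parallel, using the list from the post (the variable names are just illustrative):

xs = [12, 10, 32, 3, 66, 17, 42, 99, 20]

total = 0            # additive identity: adding 0 changes nothing
for x in xs:
    total = total + x

product = 1          # multiplicative identity: multiplying by 1 changes nothing
for x in xs:
    product = product * x

print(total, product)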
no subject
Oh, good for them! That's the right computer-science way to do it, and modern functional-programming courses often start that way (I'm currently facilitating a Scala course for a diverse group at work that does that), but it's still uncommon, yes.
no subject
no subject
You can read x = x + 1 as x <- x + 1, or even more explicitly, x gets x + 1. The use of the equals sign is very dodgy and you have to remember it has nothing to do with math. But it's the standard syntax for programming languages these days. :-/

For understanding why you start with total = 1, consider why factorial of zero is one, and why 5 to the power of zero is one. :-)

I have some feedback about your code if you're open to that -- some stylistic things.
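To make the "start it at one" point above concrete, here's a minimal sketch (the helper name product is just made up for this example):

def product(values):
    total = 1              # multiplicative identity, like 0 for a sum
    for x in values:
        total = total * x
    return total

print(product([]))         # 1 -- the "empty product", just as 0! == 1
print(product([5]))        # 5 -- multiplying by 1 doesn't change the answer
print(product([12, 10, 32, 3, 66, 17, 42, 99, 20]))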
no subject
no subject
First, you should be able to drop those int calls without changing the effect of the code. That function converts non-integer numbers to integers, or reads a numeric string (such as "476") as an integer. Both of the expressions 1 and total * xs are already going to be integers.

Second, when I see the variable xs I would generally assume it's a collection of things. One x, multiple xs. Just a naming convention. If I define a function that takes a list of integers or non-descript things, I'll probably write xs for the parameter name. But if I have a single such thing, I'll name it x.

So the main part of your code might look like this instead:

total = 1
for x in [12, 10, 32, 3, 66, 17, 42, 99, 20]:
    total = total * x

A non-exactly-coding thing is that I might call it "product" rather than "total". My first thought when reading the code was that you were adding things together, and then I had to revise my understanding when I saw the multiplication.
Again, your code is correct! But these are things you could change that would make it easier for me to read. (Of course, opinions may vary on things like naming.)
no subject
You could also change total = total * x to total *= x -- it means essentially the same thing, and would be how I would write it since it's nicely terse and expresses the concept of "folding in a multiplication" in a way I can understand at a glance, without having to notice that "total" is on both sides of the "=". (You're not just assigning the result of any ol' multiplication to total, but a multiplication that builds on total.) But... I also remember it was confusing to me when I first encountered that syntax. And not all proficient programmers even like that syntax. I only mention it because you'll likely see that syntax sooner or later, and probably already have in cases like foo += 1.

no subject
I'm surprised to see this phrase, as it feels like a bit of an oxymoron.
I also use += and *= syntax myself, but I always feel a bit icky doing it because it feels *so* idiomatic. Like, if x=x+1 is hard for non-programmers, how is x+=1 an improvement?
Or maybe I only feel this way because I learned Java first, and C later. I dunno : \
no subject
I think a good bit of this habit comes from having programmed in Clojure for 10 years, where that's the predominant aesthetic and I could just write
(println (apply * [12 10 32 3 66 17 42 99 20]))

and be done with it. To a Clojure programmer it's quite clear what that does, and writing anything longer would actually be more confusing -- because the reader would be trying to model it in their head as something more complicated, to go along with the longer code.

no subject
I think I get what you're saying. On the other hand, isn't it also true in this case that the above is also the most elegant way of writing the code?
no subject
no subject
Modulo the code-style conversations above, no -- you've got it pretty much correct, at least for mutable code. (That is, conventional Python.) I could show you the Scala version, but it boils down to the same concepts.
Let's have a moment of Category Theory: this is a little deep for where you are so far, and feel free to ignore it, but you sometimes enjoy nerdy details and this is stuff that I often teach nowadays. Feel free to ping me with questions.
What you're seeing here is a concept that is known at the theoretical level as a Monoid. Don't worry about the name (there are reasons for it, but you have to draw graphs for them to make any sense) -- basically, it's the abstract-math concept of "plus".
The underlying notion is that you have a Monoid when you have a type (in this case, integers), a "combine" operator that takes two values of that type and results in another value of that type, and an "empty" value that, when combined with any other value, leaves it unchanged.
So addition over integers is a Monoid, where the operation is "plus" and the empty is 0. Similarly, multiplication over integers is a Monoid, with an operation of "times" and an empty of 1. (The empty value is often called "zero"; I'm avoiding that here, in the interest of not defining "zero" as 1.) That's why your sum and product code needs different starting values: the different operators need different empties.
What makes this cool is that it allows you to abstract out those details: from this viewpoint, the "sum" and "product" of a collection of integers are exactly the same code, just changing which Monoid you are using. That's not an idle observation, either: in functional-programming languages like Scala, it's common to think in terms of those abstractions, and simply define that abstraction of "sum" once, and then you use that abstraction for all your Monoids.
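Since the book is Python, here's a rough Python sketch of that abstraction rather than the Scala version -- the function name combine_all and the overall shape are just for illustration, not anyone's official API:

# A Monoid here is just a pair: a "combine" function and an "empty" value.
def combine_all(values, combine, empty):
    result = empty
    for v in values:
        result = combine(result, v)
    return result

xs = [12, 10, 32, 3, 66, 17, 42, 99, 20]

# Same code, different Monoids:
total   = combine_all(xs, lambda a, b: a + b, 0)   # sum: empty is 0
product = combine_all(xs, lambda a, b: a * b, 1)   # product: empty is 1

print(total, product)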
Note that Monoids are actually really common. For example, think of Strings like "hello " and "world". Most programming languages allow you to say:
("hello " + "world") == "hello world"That makes sense because
+on String is another Monoid. (The "empty" value is the empty String, "".) I think that in Python you could write exactly your same code above with that Monoid, and it would just work.Anyway, enough theoretical nerding. Hope it isn't entirely confusing; questions welcome.
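To make that concrete, here's a minimal Python sketch of the same loop shape applied to the String Monoid (the word list is just an example):

words = ["hello ", "wonderful ", "world"]

sentence = ""              # the empty String is the identity for +
for w in words:
    sentence = sentence + w

print(sentence)            # hello wonderful world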