In a previous post we introduced the idea of a “toy model” in the context of investigating pollution in the Great Lakes. Today, I want to talk more about this idea of a “toy model,” explore a simple toy model that you can introduce in your classroom, and point you toward one resource that’s full of such ideas for students at the middle and high school level.

When a modeler approaches a problem in the real-world, they generally encounter something that is complicated, messy, and hard to get a handle on. It’s not clear what’s important and what’s not. It’s not clear what’s hidden underneath what can be observed and it’s often not clear where to even begin trying to understand what they’re seeing. There are many different approaches that modelers use when they find themselves at this point with a new problem. They may recall an analogous situation that seems similar enough to this new situation and use that observation as a starting point, trying to understand what’s different between what they see and what’s familiar. They may start with data and try and see what trends they notice or what patterns they can see from studying the data. Or, they might employ the mathematician’s strategy of what to do when faced with a problem where you don’t even know where to start – replace it with a simpler problem that you can solve but that is close enough to your original problem that you think you might learn something about solving the tough problem. When modelers use this strategy, they often say what they are doing is “considering a toy model” or “playing with a toy model.”

This strategy often leads to incredibly rich mathematics and insight into the real-world problem. While you may still be many steps away from fully understanding the real-world system, you’ve often found a path along which, at the very least, you can start your journey. One beautiful example of this is what’s known as the Renyi Parking Problem. The Hungarian mathematician Alfred Renyi first posed this problem as a toy model of the more complicated problem of random packing. Random packing situations arise in many areas of scientific and industrial interest. When scientists investigate how molecules bind to the surface of some object, it’s really a random packing problem. When you ask the question “How many jellybeans are in this jar?” you are really asking a random packing question. The basic idea is simple – if you randomly place objects in some confined region of space, how much of that space will you fill up? What happens if those objects can push around other objects that are already there? What happens if sometimes an object leaves that space? You can imagine that the problem in any particular application can quickly seem complicated and overwhelming.

Renyi posed a simple-to-understand, but mathematically rich, toy model of random packing. He imagined a long street of some length, say L, where cars of unit length were allowed to park. He then asked “what happens if the cars park randomly in any unit interval that’s not already occupied along this street?” So, there are no parking spaces, and the drivers are discourteous and don’t try to park close to other cars. The story of the investigation of this problem, and of the open questions that still remain about it, is quite interesting. But, here what we want to observe is what Renyi did. He didn’t take any particular real-world problem and make assumptions and abstractions and try and get to a problem he could mathematize. Rather, he simply posed a problem that he could easily mathematize, but that had the “flavor” of the phenomenon he was trying to understand. That’s the essence of constructing a toy model and part of the art of mathematical modeling. Often the modeler has to be able to “see through” all of the real-world complications to some underlying “toy” system that can be grasped, mathematized, and understood.
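If you’d like to see the toy model in action, it’s easy to simulate. Below is a minimal sketch of mine, written in Processing (the free programming environment that shows up again later in these posts); the street length of 100 car lengths and the 2,000 trials are just illustrative choices. The idea is to park one car at a random admissible spot and then fill the gaps on either side of it in exactly the same way.

  //A quick simulation of Renyi's parking problem (illustrative choices: L = 100, 2000 trials)
  int parkCars(float a, float b)
  {
    //No room for a unit-length car in this gap
    if (b - a < 1) return 0;
    //Park one car with its left end at a random admissible spot,
    //then fill the gaps to its left and right in the same way
    float x = random(a, b - 1);
    return 1 + parkCars(a, x) + parkCars(x + 1, b);
  }

  void setup()
  {
    float L = 100;       //length of the street, in car lengths
    int trials = 2000;   //number of random streets to average over
    float total = 0;
    for (int t = 0; t < trials; t++)
    {
      total += parkCars(0, L);
    }
    //Renyi showed this fraction approaches roughly 0.7476 as L grows
    println("Average fraction of street filled: " + total / (trials * L));
  }

Run it a few times and you’ll see the street consistently ends up about three-quarters full, which is just the kind of clean, surprising answer that makes a toy model like this one so appealing.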

The book Adventures in Modeling by Colella, Klopfer, and Resnick is chock-full of examples of toy models that can readily be investigated in the classroom. The book focuses on exploring complex, dynamic systems and hand-in-hand introduces the reader to using StarLogo as a simple simulation tool. StarLogo is one of those programming environments that was specifically designed to be easy for students to learn and to serve as an entry point into computer programming. But, even if you ignore the StarLogo part of the book, the problems and the toy models introduced are alone worth the price.

One activity that they explore is called “Foraging Frenzy” and it’s a nice one for exploring mathematical modeling, toy models, and connecting your math classroom to biology and ecology. The underlying ecology problem is central to the field. How do you predict what an animal will do when foraging for food? You can imagine how complicated such a situation can get in context! Suppose we’re talking about field mice. How far will they roam? How will they decide? What happens if there are predators in the environment? How does their behavior depend on the season? Thought about in context, the problem is certainly one of those that can feel overwhelming. I’d even argue that it is one of those problems that, if given directly to a group of students, might very well end with them saying “You can’t possibly predict what an animal will do when foraging for food!” So, what we often end up doing as teachers is just telling our students what happens and removing the whole exploration part from these complex problems. This is where I believe toy models can be very useful in the classroom.

What Colella et al. do is to introduce a toy “foraging model” that does feel tractable and does feel like one where students can start seriously exploring and thinking about how to model. They say this – buy a big bag of dried kidney beans, get three stopwatches and a piece of paper. Now, assign two students to be “food givers” and give them each half of the beans and a stopwatch. Have them sit close to one another and secretly tell each of them the rate at which they are to give out their beans. For example, tell one student to give out a bean every five seconds and the other every fifteen seconds. Now, tell the rest of your students that their job is to get as many beans as they can, but they have to follow a few rules. They have to stand in line in front of one of the “food givers” and take a bean when it is given to them. They are allowed to switch lines at any time, but always must move to the back of the other line. After they get a bean they also must move to the back of one of the lines. Now, let them go! In the meantime, you’re using your stopwatch to gather data. Note the number of people in each line at regular time intervals, say every 30 seconds. Let the whole process run for 5 minutes and then share what you’ve recorded with the class.

What you have presented students with is a really simple, accessible “toy model” of foraging behavior. It’s one for which you have data, one that’s more manageable in scope, and one that still captures the essential features of the real-world ecological problem. Now, it’s time to discuss and think about modeling. If your students behave like many animals in the natural world, what you’ll see is that the lengths of the lines become proportional to the rates of distribution that you set at the beginning. That’s something that’s called “Ideal Free Distribution” theory and is the basis for making those predictions about what a foraging animal will actually do.
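If you’d like to see that prediction emerge without rounding up a bag of beans, the activity is also easy to simulate. Here’s a quick sketch in Processing; the class size of 20 students and the simple “drift toward the better line” rule are my own illustrative assumptions, not part of the original activity.

  //A toy simulation of "Foraging Frenzy": two food givers, one line per giver
  void setup()
  {
    float rateA = 1.0 / 5.0;    //beans per second from the first food giver
    float rateB = 1.0 / 15.0;   //beans per second from the second food giver
    int foragers = 20;          //class size (an assumption)
    int inA = 10;               //start with an even split between the two lines
    for (int step = 0; step < 200; step++)
    {
      int inB = foragers - inA;
      float payoffA = rateA / inA;   //expected beans per second per student in line A
      float payoffB = rateB / inB;
      //One student re-evaluates and drifts toward the better line
      if (payoffA > payoffB) inA++;
      else if (payoffB > payoffA) inA--;
    }
    //Ideal Free Distribution predicts a split proportional to the rates: about 15 and 5
    println("Line A: " + inA + "   Line B: " + (foragers - inA));
  }

With rates of one bean every five seconds and one every fifteen, the lines settle into roughly a 3-to-1 split, which is just what Ideal Free Distribution predicts.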

I encourage you to make use of toy models liberally in your classroom as you introduce the notion of mathematical modeling. Let us know if you try this one out or come up with other neat “toy models.” We’d love to hear from you.

John

 

In his wonderful book, How Not to be Wrong: The Power of Mathematical Thinking, Jordan Ellenberg uses an excerpt from Mark Twain’s Life on the Mississippi to make an important point about fitting linear models to data. While Ellenberg’s book covers topics that extend well beyond mathematical modeling into areas one would commonly label as “quantitative reasoning,” he captures a heck of a lot about how modelers think and how a mathematical modeler approaches the world. Today, I want to borrow Ellenberg’s Mark Twain tale and discuss the importance of two words that appear in the CCSSM, namely, descriptive and analytic.

Let’s start with the excerpt from Twain’s Life on the Mississippi:

The Mississippi between Cairo and New Orleans was twelve hundred and fifteen miles long one hundred and seventy-six years ago. It was eleven hundred and eighty after the cut-off of 1722. It was one thousand and forty after the American Bend cut-off. It has lost sixty-seven miles since. Consequently its length is only nine hundred and seventy-three miles at present. . . . In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. This is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the Old Oolitic Silurian Period, just a million years ago next November, the Lower Mississippi River was upward of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.

Now, Mark Twain was a funny guy and, of course, this was intended to be a humorous passage. But, it also well-illustrates the dangers of “modeling without thinking” and that’s what I’d like to caution against here. What Mark Twain was implicitly engaging in was the practice of what the CCSSM calls descriptive modeling. And, that’s a useful and important practice, done right. But, it has its limitations and it is precisely these limitations that drive the need for what the CCSSM calls analytic modeling.

Let’s first make sure we understand Mark Twain’s analysis.  How might we approach this “Mississippi shrinking” problem from a purely descriptive point of view? Well, from the excerpt above and from doing a little digging as to when the American Bend cut-off occurred, we have four data points:

Year         Length (miles)
1716        1215
1722        1180
1858        1040
1883          973

It’s a simple matter to plot these data points and fit a line to our data:

[Plot: the four data points above with a fitted line]

If you examine the plot closely, you’ll see that we have an R-squared value of 0.9747! Well, that’s fantastic – it means more than 97% of the variance in our data is explained by our line! So, we have a mathematical model that tells us how the Mississippi is shrinking with time and we can now make predictions, right? Well, that’s really Mark Twain’s point. We can’t. In Life on the Mississippi, Twain, in effect, extracted the slope of our line and found that, according to our model, the Mississippi is losing about a mile and a third of length each year. In some sense, that’s right of course. But, in a more important sense, it is horribly wrong. The sense in which it’s wrong is the sense in which descriptive mathematical modeling is limited and is a tool that we have to wield very carefully. It’s also why, as mathematical modelers, we’re driven to seek the deeper sort of understanding that comes from analytic modeling.
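If you’d like to check those numbers rather than take my plot’s word for it, the fit is easy to reproduce. Here’s a small sketch in Processing that runs the standard least-squares formulas on the four data points above; any spreadsheet or graphing tool with a line-of-best-fit feature would do just as well.

  //Least-squares fit of river length (miles) against year, using the four points above
  void setup()
  {
    float[] year = {1716, 1722, 1858, 1883};
    float[] miles = {1215, 1180, 1040, 973};
    int n = year.length;
    float sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < n; i++)
    {
      sx += year[i];
      sy += miles[i];
      sxx += year[i] * year[i];
      syy += miles[i] * miles[i];
      sxy += year[i] * miles[i];
    }
    float slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    float intercept = (sy - slope * sx) / n;
    float r = (n * sxy - sx * sy) / sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
    println("Slope (miles per year): " + slope);   //about -1.28, roughly Twain's mile and a third
    println("Intercept: " + intercept);
    println("R-squared: " + r * r);                //about 0.9747, as on the plot
  }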

The CCSSM has this to say about descriptive modeling:

In descriptive modeling, a model simply describes the phenomena or summarizes them in a compact form. Graphs of observations are a familiar descriptive model—for example, graphs of global temperature and atmospheric CO2 over time.

That’s a pretty good description of what Twain did in his passage. What’s important to note about descriptive modeling is that it is always an extra step removed from the real-world phenomena we are trying to understand. When we do descriptive modeling, what we’re actually doing is giving some shape to a data set. We’re describing that data, saying “this data looks like this function.” Yes, we make “looks like” very precise by doing what we call “regression,” but underneath, it’s still “this data looks like this function.” And, unless the underlying phenomenon continues to behave exactly as it did when it provided our data set, our description won’t be useful for making predictions. That’s where we have to think things through very carefully. Do we have any reason to believe that the trend we see will continue? If so, how far? These are always questions we should be asking whenever we do descriptive modeling.

The CCSSM also talks about analytic modeling:

Analytic modeling seeks to explain data on the basis of deeper theoretical ideas, albeit with parameters that are empirically based; for example, exponential growth of bacterial colonies (until cut-off mechanisms such as pollution or starvation intervene) follows from a constant reproduction rate.
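To make that example concrete, here is a minimal sketch of my own (the starting population of 100 and the rate of 0.5 per hour are just numbers I picked) in which the only modeling assumption is a constant per-capita reproduction rate applied over and over in small time steps. Exponential growth isn’t assumed; it emerges.

  //Exponential growth emerging from a constant per-capita reproduction rate
  void setup()
  {
    float x = 100;       //starting population (an assumed value)
    float r = 0.5;       //per-capita reproduction rate, per hour (an assumed value)
    float dt = 0.01;     //a small time step, in hours
    int steps = 1000;    //ten hours' worth of small steps
    for (int i = 0; i < steps; i++)
    {
      x = x + r * x * dt;   //growth in each little step is proportional to the current population
    }
    println("Population after 10 hours, step by step: " + x);
    println("Analytic prediction, 100 * exp(0.5 * 10): " + 100 * exp(0.5 * 10));
    //The two agree better and better as the time step dt shrinks
  }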

Teaching students to understand the difference between descriptive and analytic approaches is a crucial part of teaching the art of mathematical modeling. Descriptive modeling has its time and place and in many situations it’s the best we can do. But, I’d argue that we should always be pushing our students deeper, pushing them to question descriptive models carefully, and pushing them to really try and understand the world by developing their skills as analytic modelers.

John

Q: How did the chicken cross the road?

A: One step at a time. 

In various Twitter conversations with @Foizym, @ddmeyer, @MathMinds,  @gfletch, and @Simon_Gregg, the question of what mathematical modeling could or should look like at the elementary level has been a recurring theme. I want to explore this question a little bit in this post and offer a few ideas that may be helpful for those trying to incorporate mathematical modeling into their elementary math classrooms.

A good deal of the angst around teaching mathematical modeling at the elementary level seems to revolve around what it is that students already know. That is, they have a limited mathematical toolbox to draw from and a limited extra-mathematical toolbox to draw from. Put simply, there’s a lot they don’t yet know. And, I can’t argue with this as being a serious challenge. If the mathematics taught in K-8 was adequate for dealing with all the myriad problems that mathematical modelers attack, well, then we would never have needed to invent calculus, or differential equations, or algebraic topology, or… you get the idea.

And, it simply is true that the extent of the tools in your toolbox limits what projects you can successfully tackle. If all I have is a hammer and saw, it’s pretty hard to build a cathedral. If I really want to build a cathedral, I’ll probably spend a good deal of time using my hammer and saw to build other tools and then use those to build still other tools, and then get around to building a cathedral. The obvious analogy is that if I want to do mathematical modeling of fluid dynamics, and all I have is algebra, I would probably need to spend some time building calculus before I’d be able to get very far.

But, this, of course, is deeply unsatisfying. It almost kicks us back to that horrible answer to a student’s question of “what is this good for?” – “wait, you’ll see in a few years.” So, if we reject that answer, and yet realize that students at the elementary level do have a limited toolbox, what do we do? I’d like to offer two answers.

The first of these is that we can assiduously look for those problems that can be tackled with a limited toolbox. This is hard! Once you own a table saw, you’re not so likely to whip out your handsaw any more, so when we survey the scientific literature, we’re not so likely to find many problems that rely only on elementary mathematics. That doesn’t mean that they don’t exist nor that the problems we do find couldn’t also be attacked by elementary means, just that what we’re seeking won’t be sitting right on the surface for us to discover. It means that someone seeking to find good problems for elementary students will themselves have to be a pretty darn good modeler. They’ll have to not only be able to digest the modeling literature that uses advanced mathematics, but also be able to see how such problems could be attacked more simply, or how parts of those problems could be attacked using elementary math. That’s a tall order but I think this is a potentially useful approach. That is, I’m arguing that rather than try and construct elementary mathematical modeling projects from scratch, let’s have elementary experts sit down with modelers, plow through some hard problems, and see what comes out the other side. Could such a team identify the accessible projects? I don’t know, but I think it is worth a try. I offer our previous discussion of “fairy circles,” albeit aimed at high school, as an example of what I mean here.

So, what’s the second answer? I’d argue that another approach we could take is to think really hard about why tools like calculus were developed in the first place. I’d argue that while much of the mathematical machinery used day-to-day by mathematical modelers is inaccessible to young students, a lot of the ideas behind that machinery are actually very accessible. The simplest and perhaps broadest example that occurs to me is the notion of iteration. Much of what we do in science and much of what we do as modelers is to build on a very simple idea that I’ll state like this – right now, the world looks exactly like it did a little bit ago, but with some tiny changes. That is, calculus, differential equations, and a whole bunch of the mathematical machinery that modelers rely on was built to be able to model change. And, the idea that is repeatedly used is the one we just stated, things change in small steps.

So, if I want to know how the chicken crossed the road, the useful answer is “one step at a time.” I then build up my understanding of “chicken crossing” from a chicken repeatedly executing single steps. If I do this over and over, I get my chicken where he wants to be. To see how we might follow this line of thinking to get from a “standard” calculus-based modeling topic down to the elementary level, let’s look at a simple, canonical mathematical model that you’ll see in any modeling textbook – population growth.

Typically, in studying population growth (or how the amount of any “stuff” increases or decreases with time), you’ll first examine linear growth. If x(t) is the amount of stuff at time t, you’d write:

\frac{dx}{dt} = A

So, what are we really saying? Well, we’re saying that the rate of change of x is a constant. We can view that differently if we approximate our derivative by its difference quotient:

\frac{x(t+\delta t)-x(t)}{\delta t} = A

Or, rearranging:

x(t+\delta t) = x(t) + A \delta t

Now, what does this really say? Well, it says the amount of stuff we’ll have in the future, x(t+\delta t), is equal to the amount of stuff we have now, x(t), plus a tiny change, A \delta t. That is, in a little bit, the world looks just like it does now, with a tiny change. We do this repeatedly, and we get a picture of how our system changes according to the underlying process of changing by constant small steps.
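In code, that repeated updating is nothing more than a loop. Here’s a tiny sketch of the constant-rate case above; the particular values of A, the time step, and the starting amount are arbitrary choices for illustration.

  //Repeatedly applying x(t + dt) = x(t) + A*dt
  void setup()
  {
    float x = 1;       //the amount of stuff we start with (an arbitrary choice)
    float A = 2;       //the constant rate of change (an arbitrary choice)
    float dt = 0.1;    //the size of one small time step
    for (int step = 0; step < 50; step++)
    {
      x = x + A * dt;  //a little later, the world looks like it does now, plus a tiny change
    }
    //After 50 steps of size 0.1 we've advanced 5 time units, so x should be 1 + 2*5 = 11
    println("x after 50 small steps: " + x);
  }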

Well, that’s not really very hard to grasp at all! So, imagine we introduced this idea to young students. Imagine we showed them a simple table of data that gave the population of something for different times. Maybe it looks like this:

Time    Population
0                 1
1                 2
2                 3

and so on… Often, we’ll give data like this and ask students to predict what comes next or to discover the pattern. Now, what we’re doing is asking them to guess at the underlying process leading to the data they see. If a student’s answer (model) looks like “the amount of stuff we have later is equal to the amount of stuff we have now plus a little bit,” that seems to me like a pretty powerful realization.

You can imagine then presenting students with different systems that change with time in different ways and having them engage in the modeling exercise of trying to understand the key underlying process. Their model might be as simple as a written statement, but one they can test by doing repeated addition, subtraction, multiplication, and division. Heck, they might even start to wonder if there is a more convenient or powerful way to do this stuff…
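Even that testing step can be made concrete in a few lines. Here’s a sketch that checks the written-down model “later equals now plus one” against the little table above; both the rule and the data are, of course, just this toy example’s.

  //Testing a student's model ("the population goes up by 1 each step") against the data above
  void setup()
  {
    int[] population = {1, 2, 3};   //the observed data from the table
    int predicted = population[0];  //start the model off at the first observation
    boolean ruleWorks = true;
    for (int t = 1; t < population.length; t++)
    {
      predicted = predicted + 1;    //the model: later = now, plus a little bit (here, 1)
      if (predicted != population[t]) ruleWorks = false;
    }
    println("Does 'add 1 each step' reproduce the data? " + ruleWorks);
  }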

I guess at the end of the day what we’re arguing is once again that mathematics is the science of pattern and mathematical modeling is about applying that science to patterns that occur in the real world. If we keep this in mind, and look for those patterns in the real world whose origin or dynamics can be explained by some iterative process that at its heart consists of addition, subtraction, multiplication, and division, we might indeed be able to engage students at all levels in genuine, interesting, engaging mathematical modeling investigations.

John

 

 

Last time, we introduced the fairy circles of Namibia and talked about the idea of self-organization as a possible explanation. In case you’ve forgotten, these are bare patches in the desert, roughly circular, roughly the same size, that form over a huge swath of land. They look like:

Fairy circles

We talked about recent attempts to explain these circles using a mathematical modeling approach and noted that current models are, well, mathematically complicated. We ended with the claim that, even so, this might just be a good modeling investigation for the high school classroom.

So, today, I want to follow that thread and think about how your students might get a handle on this problem and what type of investigation they might be able to carry out. Here, I’m going to argue that this is a problem where a fairly simple model can be proposed, but that what’s essential to making it accessible is having students carry out the “analysis” of the model via simulation.

Let’s think about the fairy circles again for a moment. If you read the Ecography paper we mentioned last time or even if you just read the summary in Science News, you quickly get a sense that the main proposed driver of the growth of fairy circles is competition between plants for the scarce resource of water. There’s more to it than this, but as a modeler, it’s worth ignoring all the complications and just thinking through this simple bit a little more. Remember, when we’re doing mathematical modeling, part of our job is to do what we’re doing here – make a guess about what we think is driving what we see, and then build a model around the guess. If our model reproduces what we see based on this driver, we have a little more evidence that our guess is the correct one. If not, we iterate!

So, suppose we have a whole bunch of randomly dispersed plants in some region. Perhaps the situation looks something like this:

[Image: startingconfig – 3000 small plants (black dots) and 10 big plants (red dots)]

 

To create this picture, I chose 3010 points at random in a restricted subset of the plane. I then plotted small dots at 3000 of these points. At the other 10 points, I plotted larger, red dots. What I’m imagining here is that we have a bunch of randomly dispersed plants and that most of these are “small” plants, i.e. the 3000 dots. The other 10 plants are “large” plants. They are going to be a bit special.

Now, what I imagine next is that as time evolves, each small plant has some probability of dying. Nothing special at all so far! We have a bunch of plants and over time, they can die. Here’s where we are going to introduce this idea of competition. Suppose the probability of a small plant dying is higher the closer it is to the nearest big plant. So, we’re saying that if I’m close to a big plant, it’s going to suck up all the water and I’m more likely to die. Well, if we then let time proceed in discrete steps, roll a die at each time step to see whether or not each plant in our picture lives, and remove the dead ones from our picture, we can then see whether or not a pattern of bare spots evolves in our system. Designing a simulation to do this would be a test of our model and a test of our hypothesis that it’s this form of competition creating fairy circles.

Note that we’re not using very sophisticated mathematics at all right now. Let’s be really explicit about what our model looks like. Here it is:

(1) We assume that there are two types of plants in our system, “big” and “little.”

(2) We assume our plants are randomly distributed in some fixed region.

(3) We assume time can be modeled as proceeding in discrete steps.

(4) With each small plant, we associate a probability of dying that decreases with its distance from the nearest big plant; in the simulation below, that probability is exp(-a*d), where d is the distance to the nearest big plant and a is an adjustable competition factor.

(5) We let time proceed in our system, rolling a theoretical die to determine whether or not each small plant dies during that time step, and remove it from the picture if it does.

(6) After a while, we look at our picture and see if any patterns emerge.

Now, in a second I’ll show you what doing this looks like and talk a bit more about the simulation. But, first, let me note that you can explore this idea of pattern formation in  a really simple way in your classroom. Imagine literally doing this with your students. Literally. That is, suppose you identified the student in the middle of your classroom as the “big” plant. Then, assign students probabilities that depend on how close they sit to this “big” plant student. Now, generate random numbers, and have those students who “die” get up and walk to the back of the room. If you do this for a few iterations, what does the space defined by the empty seats look like?

Well, when you do this with the 3010 plants in the above picture, after a few iterations, you see the following:

[Image: endingconfig – bare, roughly circular patches surrounding each big plant]

 

Huh. We have bare patches, roughly circular, that kind of look like fairy circles! So, have we solved the mystery? Well, no. Note that there is a really important difference between our fairy circles and the ones we see in nature. Our circles all have a live plant smack dab in the middle of them! We’ve really shown how fairy donuts might form, rather than fairy circles. Well, this is quite interesting and begins to suggest to us that the competition driving the formation of fairy circles might be a little more complicated or subtle than we first supposed. At the same time, it nicely highlights why mathematical modeling is an iterative process. We started with an observation, made a guess about what drove what we saw, built a model around that guess, analyzed the model through simulation, and then compared the results of our analysis with our original observation. The difference between what our analysis led to and what we originally observed now forces us to modify our guess, improve our model, and… around we go again.

While I don’t know that your students will be able to totally solve the fairy circle mystery, I can easily imagine that they can carry out an investigation like this one. Going around the modeling cycle together a few times in an investigation like this one seems quite worthwhile. You may not get to the “end” or completely solve the mystery, but I hope that will serve as a reminder that all science is provisional, that these investigations build over time, that we learn a bit at each step, and that a lot of the real fun comes in the investigation rather than the solution.

So, to carry out this type of investigation, it’s obviously important that students have the ability to sketch out and quickly use a simulation tool. As I’ve mentioned before, there are many free options that let students do this. For this particular one, I used one of my favorites, an open-source, easy-to-use, but powerful language called Processing. I’ll paste my Processing code below in case you want to give it a try. As always, let me know if you want to talk fairy circles some more. We’re always happy to help and talk more about modeling.

John

Processing Code – Uses Processing 3

//Sketch to simulate proposed mechanism for Fairy Circles
//John A. Pelesko, 8/24/2015

void setup()
{
  //Create the space we will work in.
  size(500, 500);
  background(255);

  //Create a 2-dimensional array for our "big" plants
  int colBig = 2;
  int rowBig = 10;
  int[][] BigPlants = new int[colBig][rowBig];
  for (int i = 0; i < colBig; i++)
  {
    for (int j = 0; j < rowBig; j++)
    {
      float r = random(500);
      int s = int(r);
      BigPlants[i][j] = s;
    }
  }

  //Now display Big Plants as large ellipses
  for (int j = 0; j < rowBig; j++)
  {
    fill(250, 5, 21);
    ellipse(BigPlants[0][j], BigPlants[1][j], 15, 15);
  }

  //Create a 2-dimensional array for our "little" plants
  int colSmall = 3;
  int rowSmall = 3000;
  int[][] SmallPlants = new int[colSmall][rowSmall];
  for (int j = 0; j < rowSmall; j++)
  {
    float r1 = random(500);
    float r2 = random(500);
    int s1 = int(r1);
    int s2 = int(r2);
    SmallPlants[0][j] = s1;
    SmallPlants[1][j] = s2;
    SmallPlants[2][j] = 1;
  }

  //Now display Small plants as small ellipses
  for (int j = 0; j < rowSmall; j++)
  {
    fill(15, 15, 15);
    ellipse(SmallPlants[0][j], SmallPlants[1][j], 5, 5);
  }

  //Save a picture of where we start
  save("startingconfig.jpg");

  //For our small plants we have included a third column of data
  //The 3rd column will be the state variable with 0=dead, 1=alive

  //Setup an array to hold distances that we compute and probabilities
  //To compute the values for the probabilities, we want to find the distance between a given small plant and each large plant
  //Then, we take the smallest distance and set the probability of death as exp(-a*distance)
  //The parameter a is an adjustable competition factor
  float[] distarray = new float[rowBig];
  float[] probarray = new float[rowSmall];

  //Now, loop through each row of the array of small plants; for each, compute the distance to each big plant and store it
  for (int j = 0; j < rowSmall; j++)
  {
    for (int i = 0; i < rowBig; i++)
    {
      float xdif = sq(SmallPlants[0][j] - BigPlants[0][i]);
      float ydif = sq(SmallPlants[1][j] - BigPlants[1][i]);
      distarray[i] = sqrt(xdif + ydif);
    }

    //Find shortest distance in distarray
    float s = min(distarray);
    //Set probability for jth small plant: the closer it is to a big plant, the more likely it is to die
    float prob = exp(-0.01 * s);
    probarray[j] = prob;
  }

  //Now, evolve the system
  int timesteps = 3;
  for (int i = 0; i < timesteps; i++)
  {
    for (int j = 0; j < rowSmall; j++)
    {
      float r1 = random(1);
      if (r1 < probarray[j])
      {
        SmallPlants[2][j] = 0;
      }
    }
  }

  //Finally, display evolved system, only showing live plants
  background(255);
  for (int j = 0; j < rowBig; j++)
  {
    fill(250, 5, 21);
    ellipse(BigPlants[0][j], BigPlants[1][j], 15, 15);
  }

  for (int j = 0; j < rowSmall; j++)
  {
    if (SmallPlants[2][j] == 1)
    {
      fill(15, 15, 15);
      ellipse(SmallPlants[0][j], SmallPlants[1][j], 5, 5);
    }
  }

  //Save a picture of where we end
  save("endingconfig.jpg");
}