Some time ago, I had the pleasure of spending part of my summer working with a local high school teacher (Chuck Biehl), an undergraduate mathematics education major (Alexandrea Hammons), and a math education faculty member (Alfinio Flores) on a project we just called the “Bubble Board.” At the time, our interest was in developing a simple hands-on project that Chuck could take back to his classroom, one where his students could gather data and learn a few things about curve fitting using a data set they’d gathered themselves. We wrote this up as an article for the Ohio Journal of School Mathematics. You can find the full article here.

Today, I thought I’d revisit this project and talk a bit about it from the perspective of mathematical modeling. The Bubble Board is a great system in that it’s very simple to build and use, and in that students can gather data using nothing more than a stopwatch, a pencil, and paper. At the same time, the behavior of the system is interesting, and yet mathematically accessible, for a wide range of students.

The Bubble Board was originally designed by the physical chemist Goran Ramme of Uppsala University in Sweden. Like many scientists before him, from Isaac Newton to Lord Rayleigh, Ramme has been fascinated with soap films. It was from his wonderful 2006 book, Experiments with soap bubbles and soap films, that I first learned of the Bubble Board.

In designing the Bubble Board, Ramme was interested in devising a way to measure the average lifetime of a soap bubble. You blow a bubble and eventually it pops, but if you blow many bubbles and measure how long it takes each one to pop, what does the distribution of bubble lifetimes look like? Ramme’s Bubble Board gives you a way to blow a whole array of soap bubbles all at once. Here’s a picture of the version of the Bubble Board that we made:

As you can see, the system is simple. You have a latex sheet with an array of 56 identical, evenly spaced holes drilled into it. Through each hole, you place a soda straw so that about 2 cm of the straw pokes through one side and the rest of the straw hangs below. The short ends of the straws are then dipped, en masse, into a soap solution, creating a flat soap film over the top of each straw. The board is then flipped and the long ends of the straws submerged in a water tank. The water, of course, rises in each straw and the resulting pressure “blows” a bubble at the other end of the straw. You end up with an array of identically sized soap bubbles.

(Bonus Modeling Problem – How big will each soap bubble be? What is the relationship between how far you submerge the straws in water and the radius of each bubble?)

Now, Ramme approached the Bubble Board from the perspective above. That is, he approached the Bubble Board as a tool for measuring the lifetime of a large array of bubbles simultaneously, thereby building a picture of the distribution of bubble lifetimes and gaining insight into the average lifetime of a soap bubble. We approached the Bubble Board from the point of view of dynamical systems. That is, if you create this array of identical bubbles all at the same time, how does the population of bubbles evolve with time? Or, more simply – How many bubbles will be left at time t?

The dynamical systems perspective brings the Bubble Board into the world of population dynamics. This is of obvious interest in fields like ecology, where one wants to understand how the population of a given species, or group of species, changes with time. The study of population dynamics and the mathematical modeling of these types of problems has led to much beautiful and interesting mathematical work of broad applicability.

So, let’s think about the Bubble Board from this perspective, through the lens of mathematical modeling. When we first started building our Bubble Board and still hadn’t conducted any experiments, we reasoned as follows: “Well, if you have more bubbles at any given time, more are going to pop in the next instant of time, so the population of bubbles should decrease in a way that’s proportional to the population at any given time.” In other words, exponentially. That is, we argued that the rate of change of the total bubble population, P(t), should be proportional to the bubble population:

(1)   \begin{equation*} \frac{dP}{dt} = -r P(t) \end{equation*}

Here, r is the rate of decay of our bubbles. Well, we’ve seen this equation before in this space and we know that the solution looks like this:

(2)   \begin{equation*} P(t) = P_0 e^{-rt} \end{equation*}

Here, P_0 is the number of bubbles at time zero. So, we expected our bubble population to simply exhibit exponential decay. Then, Alex (Alexandrea) went to lab and started measuring. Rather than the nice exponential decay we expected, Alex found this:

In this figure, the different colors indicate different types of soap solution, but here, let’s just focus on the purple or blue data points. Clearly, the data is not purely exponential. For some reason, the decay curve starts out somewhat flat and then exponential behavior seems to take over and drive the decay. Now, I haven’t put this discussion in the context of the modeling cycle, but hopefully you can see this as an example of how the cyclic nature of mathematical modeling arises naturally through comparison of model prediction and real-world data. We started with our hypothesis about how the system should behave, built our mathematical model and predicted a decay curve that was purely exponential. But, when comparing to the real-world, we see that we were clearly wrong! Well, we got the decay part right and part of the curve looks exponential, but certainly, there is some important behavior in our system that our model is not capturing.

So, we need to go back and revise our model and see if we can glean a deeper understanding of our system. Thinking about our array of bubbles a little more carefully, we realize that if it’s true that bubbles have a common average lifetime, then near the start of the experiment very few bubbles should actually be popping. For example, if your average bubble lives for one minute, then near time zero, i.e. the start of the experiment, only a few “outlier” bubbles should pop. Most bubbles should persist, and then, as time gets close to one minute, we should start to see your typical bubble pop. Here, the behavior should look like exponential decay: when your “average” bubble is popping, the number popping should be proportional to the number of bubbles you have. As you get well past one minute, you should again only see your “outlier” bubbles and they too should eventually pop.

How might we modify our mathematical model to capture this behavior? Well, in our original model we assumed a constant rate of decay. We called this constant r and said that for all time our population should decay at this fixed rate. But, now, we’re saying that for short times, this rate should be small and should increase to some constant rate only as time gets close to the average decay time of our bubbles. That is, our look at the data and our new hypothesis about how our population behaves implies a decay rate that varies with time rather than remains constant. Mathematically, we can achieve this by modifying our model like this:

(3)   \begin{equation*} \frac{dP}{dt} = -r P(t)(1-P(t)/M) \end{equation*}

If we think about this new term as being lumped together with the rate, r, that is, if we think about this as being our rate:

(4)   \begin{equation*} r(1-P(t)/M) \end{equation*}

then our rate of decay is small when the population is large, as we expect, and gets larger as the population shrinks. In fact, as the population shrinks, the new term becomes negligible and our model approximately becomes one of exponential decay. This new model is called a logistic model and the solution looks a little different than our previous solution:

(5)   \begin{equation*} P(t) = \frac{P_0 M}{P_0 + (M-P_0) e^{rt}} \end{equation*}

More importantly, the shape of the decay curve looks a lot more like the one we observed experimentally:
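If you’d like to see this shape emerge for yourself, here’s a short Python sketch (mine, not part of the original experiment, and with made-up parameter values) that plots the pure exponential solution (2) alongside the logistic solution (5):

[code language="python"]
#A sketch comparing pure exponential decay, equation (2), with
#logistic decay, equation (5). All parameter values are illustrative.
import numpy as np
import matplotlib.pyplot as plt

P0 = 56.0 #Initial number of bubbles (one full Bubble Board)
M = 60.0  #Logistic parameter, chosen slightly above P0
r = 0.05  #Decay rate, per second, invented for illustration

t = np.linspace(0, 200, 400)
P_exp = P0*np.exp(-r*t)                  #Equation (2)
P_log = P0*M/(P0 + (M - P0)*np.exp(r*t)) #Equation (5)

plt.plot(t, P_exp, label="Exponential decay")
plt.plot(t, P_log, label="Logistic decay")
plt.xlabel("Time (seconds)")
plt.ylabel("Bubbles remaining")
plt.legend()
plt.show()
[/code]

Notice how the logistic curve starts out flat before the exponential behavior takes over, just as in our data.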

So, we can feel a bit more comfortable in that our model captures the real-world behavior more accurately. Of course, more work remains to be done! How, for example, does the constant M in our new model relate to properties of our soap bubbles? How does the constant r relate to these properties? Is there some reason to believe our variable rate is the right one?

Hopefully you enjoyed our detour into Ramme’s Bubble Board and can see it as a hands-on way to introduce your students to some interesting mathematical modeling questions and to the broader topic of population dynamics. The system lends itself to investigation by students across a wide range of mathematical backgrounds, so whether you investigate the simple problem of predicting bubble size as a function of the depth the straws are submerged in the water, or the more complex problem of predicting how the population size changes with time, I think you’ll find something here to enjoy.

John


Well, the last few months of 2016 went by much too quickly and unfortunately left me with little time to post. But, it’s a New Year, and I’m anxious to get back to talking about mathematical modeling. So, Happy New Year! Now, let’s get back to work.

Recently, I found myself thinking about several points that we’ve explored in earlier posts. One of these, explored in “Caught or Taught?“, is the idea that mathematical modelers often draw upon a library of canonical mathematical models that they have at their fingertips when they approach a new problem. That is, they often reason by analogy, and use situations and models with which they are familiar as a starting point for thinking about new, unfamiliar, situations. The second point that’s been on my mind is the one explored in “Arduino as a simple tool for hands-on modeling activities,” and is the idea that the widespread availability of low-cost microcontrollers and sensors opens up new possibilities for hands-on activities in the modeling classroom. For many years at the University of Delaware, I’ve taught a mathematical modeling course where we’ve had students engage in hands-on experiments in our own laboratory. I’m constantly amazed that experiments which cost us thousands of dollars to perform just ten years ago can now be carried out at home on your desktop with just a few dollars in equipment.

So, today, I thought I’d explore a canonical mathematical model, but do it in a way that was hands-on and made use of accessible, low-cost technology. Along the way, I’ll point out some problems where you and your students can explore further. The basic mathematical model, exponential decay, is one with which you’re surely familiar, and is, in fact, one of the “starred” domains in the Common Core State Standards. Of particular relevance are the standards:

Distinguish between situations that can be modeled with linear functions and with exponential functions.

Recognize situations in which a quantity grows or decays by a constant percent rate per unit interval relative to another.

Interpret the parameters in a linear or exponential function in terms of a context.

To carry out this project, I enlisted the aid of my daughter, Julia, and this weekend, we spent time playing with potatoes. What more could a high-school student ask from their weekend? The question that we sought to answer was this – Which cools faster, a large potato or a small potato? Somewhat surprisingly, when we polled a few unwitting participants as to their answer to this question, two schools of thought emerged. One school of thought held that the small potato would cool faster as it “held less heat” and hence as it shed energy its temperature would drop faster. The other school of thought held that the large potato would cool faster as it had a larger surface area and hence its rate of losing energy would be greater. Who’s right?

To explore this question, we decided we’d first build a mathematical model and try to make a prediction. Then, we’d design and carry out an experiment, compare, and see if we could both demonstrate an answer and understand why potatoes behave however they behave. For our model we were, of course, treading well-trodden ground. Examine the index of any introductory calculus textbook or any introductory physics textbook and you’ll find an entry for “Newton’s Law of Cooling.” Turn to the page referenced and in the calculus text, you’ll find yourself in the chapter or section on exponential and logarithmic functions. This goes back to our earlier point about canonical models. This mathematical model is certainly not new, but the idea that systems exhibit exponential growth or decay is so useful and encountered so frequently that it is worth exploring models like these deeply. So, without extensive derivation, here’s our mathematical model for the temperature of a potato:

(1)   \begin{eqnarray*} m c_p \frac{dT}{dt} = - hA (T-T_A) \\ T(0) = T_0 \end{eqnarray*}

Here, the unknown is the potato temperature, T(t). Room temperature is T_A and initially the potato is at some higher temperature, T_0. There are four parameters in the model: the mass of the potato, m; the specific heat, c_p, which measures the amount of energy needed to raise a unit mass of potato one degree in temperature; the surface area of the potato, A; and the heat transfer coefficient, h, which measures how fast the potato loses heat energy to the surrounding environment. We note that this model can be thought of as a statement of the principle of conservation of energy. The equation simply says the change in the energy of the potato is equal to the energy lost to the surrounding environment. The left-hand term is this change in energy, and the right-hand term relies upon Newton’s Law of Cooling, which says that the energy lost to the surrounding environment is proportional to the difference between the temperature of the body and the temperature of the surrounding environment.

Now, we know that the exponential function is this very special function whose rate of change is everywhere proportional to itself. Our mathematical model says that the temperature difference, T(t) - T_A, has exactly this property: its rate of change is everywhere proportional to itself. Hence, our mathematical model is easily solved for T(t):

(2)   \begin{equation*} T(t) = T_A + (T_0 - T_A)e^{-\frac{hA}{mc_p}t} \end{equation*}
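As a quick sanity check, here’s a small sketch (with invented parameter values) that integrates model (1) numerically with Euler’s method and compares the result to the closed-form solution (2):

[code language="python"]
#Numerically integrate equation (1) and compare to equation (2)
#All parameter values below are invented for illustration
import math

T_A = 70.0   #Room temperature, degrees F
T0 = 160.0   #Initial potato temperature, degrees F
alpha = 0.03 #The lumped ratio hA/(m c_p), per minute
dt = 0.1     #Time step, minutes
steps = 900  #90 minutes total

T = T0
for n in range(steps):
 T += dt*(-alpha*(T - T_A)) #Euler step on dT/dt = -alpha (T - T_A)

T_exact = T_A + (T0 - T_A)*math.exp(-alpha*dt*steps) #Equation (2)
print("Euler:", round(T, 2), "Exact:", round(T_exact, 2)) #Nearly agree
[/code]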

We see that our potato cools exponentially, yes, but more importantly, how fast this decay happens for a particular potato is governed by the ratio of the four parameters in our problem:

(3)   \begin{equation*} \alpha = \frac{hA}{mc_p} \end{equation*}

Recall that we want to know whether a “big” potato will cool faster or slower than a “small” potato. The answer lies in interpreting our model and, in particular, in interpreting \alpha. For each potato, since they differ in mass and surface area, we’ll have a different \alpha. Suppose we call the \alpha for our small potato \alpha_S and for our large potato, \alpha_L. If we examine the ratio \frac{\alpha_S}{\alpha_L}, this ratio will give us our answer. If it’s bigger than one, the small potato must cool faster; if it is less than one, the large potato must cool faster. But, also notice that if we assume our potatoes are made of the same “potato-stuff,” then h and c_p are the same for each potato, so this ratio only depends on a combination of potato masses and surface areas. In particular, this ratio reduces to:

(4)   \begin{equation*} \frac{\alpha_S}{\alpha_L} = \frac{A_S m_L}{A_L m_S} \end{equation*}

Here, the subscripts denote the small and large potatoes, as above. So, off to the supermarket we traveled where we bought two standard baking potatoes, one large, one small. The masses were easy to measure with our kitchen scale:

(5)   \begin{eqnarray*} m_S = 194.8g \\ m_L = 330.4g \end{eqnarray*}

But, how to measure potato surface area? (Here’s a problem for further exploration. How do you compute the surface area of a potato? How do you measure it?) I left Julia to tackle this question and she decided that this:

rather resembled this:

and after some measurements and computations arrived at:

(6)   \begin{eqnarray*} A_S = 137.13 cm^2 \\ A_L = 217.6 cm^2 \end{eqnarray*}
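If, for example, you treat the potato as an ellipsoid with semi-axes a, b, and c measured with a ruler, here’s a sketch of how the computation might go (using the Knud Thomsen approximation; the semi-axes below are invented, not Julia’s measurements):

[code language="python"]
#A sketch of one way to estimate potato surface area, assuming we
#model the potato as an ellipsoid with semi-axes a, b, c (in cm)
#Uses the Knud Thomsen approximation, accurate to about 1%
import math

def ellipsoid_area(a, b, c):
 p = 1.6075 #Thomsen's exponent
 return 4*math.pi*(((a*b)**p + (a*c)**p + (b*c)**p)/3)**(1/p)

#Illustrative semi-axes, not our actual measurements
print(ellipsoid_area(5.0, 3.5, 3.0), "square cm")
[/code]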

Putting this all together, we arrived at:

(7)   \begin{equation*} \frac{\alpha_S}{\alpha_L} \approx 1.06 \end{equation*}

and hence our mathematical model leads us to predict that this particular small potato should indeed cool faster than this particular large potato.

Our next step was to conduct some potato experiments. But, before we go there, let me point out another problem for future exploration. We’re making a prediction for our particular two potatoes. In this case, we predict that the small potato should cool faster than the large potato. But, is this always going to be the case? Surely, if we took our large potato and stretched it out into something resembling a giant French fry it would cool faster. Wouldn’t it? How does our ratio, \alpha, depend on potato shape? Can you find two potatoes that you would call “large” and “small,” where the large potato should cool faster?

Now, on to experimental potatoes. For our experiment, we used a low-cost microcontroller called a Particle Photon ($19) and a TMP36 temperature sensor ($1.50). We wrote Python code to carry out the sensing, gather data every minute, and store the data to a file for later analysis. This let us get lots of data for each potato, carry out the experiment over a long time (one and a half hours), and not need to be there to monitor the experiment. If you’re interested, I’ve pasted the Python code at the bottom of this post for you to use or copy as you see fit. Now, if you don’t want to go the route of microcontrollers and sensors, all you need to carry this experiment out is a way to measure temperature and a watch. You could use a Vernier temperature probe or even a good old-fashioned glass thermometer. To heat our potatoes we placed each one in the microwave oven for five minutes. We then stuck our probe into the middle of the potato as best we could, sat back, and let our potatoes cool. Here’s our simple setup:

And, here’s our data:

As you can see, the small potato achieved a higher temperature initially, but, as predicted, cools at a faster rate. Since we placed each potato in the microwave for the same length of time and the small potato has smaller mass, it makes sense that its initial temperature should be higher. The transient behavior at the start also makes sense – it takes time for the probe to get to potato temperature. It’s exciting to see that our model and our analysis of \alpha yield a correct prediction about which should cool faster. By this point, Julia’s potato-patience was wearing thin, so we left further analysis for another day. But, here’s one final suggestion for exploration for you and your students. If you take the data above (or your own data) and fit an exponential to the exponential part of the curve, your fit will give you an experimental value of \alpha for that potato. If you take the ratio of the two values, how close do you get to 1.06?
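If you want to try this last exercise, here’s a sketch of how that fit might go in Python (assuming you’ve logged time-temperature pairs with the script at the bottom of this post, or gathered your own data; adjust the file name and initial guesses as needed):

[code language="python"]
#A sketch of fitting the cooling model to logged potato data
#Assumes a CSV with Time,Temperature columns, as written by the
#logging script below
import numpy as np
from scipy.optimize import curve_fit

def cooling_model(t, T_A, T0, alpha):
 return T_A + (T0 - T_A)*np.exp(-alpha*t)

data = np.genfromtxt('temp_data.csv', delimiter=',', names=True)
t, T = data['Time'], data['Temperature']

#Skip the first few samples while the probe comes up to potato temperature
params, _ = curve_fit(cooling_model, t[5:], T[5:], p0=[70.0, 150.0, 0.001])
print("Fitted alpha:", params[2])
[/code]

Repeat the fit for each potato, and compare the ratio of the two fitted values of \alpha to 1.06.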

Well, I hope you’ve enjoyed thinking about this canonical mathematical model and thinking a bit about hot potatoes. Best wishes for a fun year of mathematical modeling!

John


[code language="python"]
#Code for temperature monitoring using Particle Photon
#Using TMP36 temperature sensor with Photon
#Using standard wiring, red -> +3.3V, black -> GND, blue -> A0
#Reading is taken from A0 and converted to a temperature reading
#Note we had to install package spyrk via pip install spyrk

#Here is how to access the Particle Cloud

#Should be able to call via the access token for the system
ACCESS_TOKEN = 'YOURTOKENHERE'
#Or can use username and password
USERNAME = 'YOURUSERNAME'
PASSWORD = 'YOURPASSWORD'

#To create a connection to Python Code
from spyrk import SparkCloud
spark = SparkCloud(USERNAME,PASSWORD)

#Other packages we will need
import sys #Used to break the script if device not connected
import time #Used for delays and to assign time codes to data readings
import numpy as np #Used for creating vectors, etc. 
import statistics as stat #Used for computing median, etc.
import matplotlib.pyplot as plt #For plotting
import csv #For writing data to a csv file

#First we will test the connection to the device and terminate the script if not connected
#If connected we alert the user and continue
if spark.YOURDEVICENAME.connected != True:
 sys.exit("Device Not Connected")
elif spark.YOURDEVICENAME.connected == True:
 print("Device Connected")

#Now we will open a file for the temperature data
with open('temp_data.csv', 'w', newline='', encoding='utf8') as csvfile:
 filewriter = csv.writer(csvfile, delimiter=',', quotechar = '|', quoting=csv.QUOTE_MINIMAL)
 filewriter.writerow(['Time','Temperature'])
 
#Now we construct a function that will read A0 and return temperature
#Note the user calls this function by passing read_length which is the number
#of samples the function will take. The temperature computed from the median of these samples is returned to
#the user. That is, this function applies basic median filtering to the measurement. 
def read_temperature_F(read_length):
 work_space = np.zeros(read_length) #Creates an empty vector of length read_length
 for i in range(0,read_length): #This for loop reads read_length number of samples and puts them in work_space
   A0=spark.YOURDEVICENAME.analogread('A0')
   temperature = (9/5)*((A0*3.3)/4095 - 0.5)*100 + 32
   work_space[i] = temperature
 temperature = stat.median(work_space) #Finds median of readings and returns median value
 return temperature

 
#Now we want to set up a basic data gathering and plotting system for temperature readings
#We'll decide how many samples we want to take and how long between samples. Then, we'll gather
#those samples with time data as well and plot the temperature versus time
samples = 30 #We're going to take this many data points
time_delay = 55 #We'll allow time_delay seconds to elapse between measurements
temperature_data = np.zeros(samples) #Creates a vector for our temperature data
time_data = np.zeros(samples) #Creates a vector of same length for time

#This loop does the measurements
start_time = time.time() #Reference point so time_data records elapsed seconds
for i in range(0,samples):
 temperature_data[i] = read_temperature_F(8)
 time_data[i] = time.time() - start_time #Elapsed wall-clock time; time.clock() measured CPU time and was removed in Python 3.8
 print("Sampled temperature is", temperature_data[i], "at time", time_data[i])
 with open('temp_data.csv', 'a', newline='', encoding='utf8') as csvfile:
   filewriter = csv.writer(csvfile, delimiter=',', quotechar = '|', quoting=csv.QUOTE_MINIMAL)
   filewriter.writerow([time_data[i],temperature_data[i]])
 time.sleep(time_delay)

#Now, we plot the results
plt.plot(time_data,temperature_data,'ro')
plt.axis([0,time_data[samples-1],min(temperature_data)-5,max(temperature_data)+5]) #Auto-scale the temperature axis; a fixed 60-80 range would clip hot-potato data
plt.xlabel("Time (seconds)")
plt.ylabel("Temperature (degrees F)")
plt.title("Temperature Data - Particle Photon and TMP36 Probe")
plt.show()
[/code]

Today was the last day of this year’s NSTA STEM Forum and bright and early tomorrow morning I’ll be headed back to Delaware. I want to thank NSTA for a great conference and I want to especially thank all of those who joined me for one of our two NCTM workshops on mathematical modeling. I really enjoyed working with all of you and hope to have that chance again in the near future. I’ll repeat the offer I made at the end of each workshop this week – please feel free to tweet or email anytime with your questions, thoughts, or comments about mathematical modeling! I love thinking about this stuff and love hearing from math and science teachers working to implement mathematical modeling in their classrooms.

At this week’s STEM Forum there were a massive number of fascinating sessions offered. So many, in fact, that for each session I attended, it felt like there were a half dozen I had to miss that I really wish I could have attended! So, today, for those of you who couldn’t join us for our NCTM workshops, I’ll give a brief recap and provide the slides and a few other related materials. Along the way, I want to share and explore some of the excellent thoughts and ideas offered by participants during and after these workshops. I apologize in advance for not having the foresight to write down names! If you recognize yourself below, please drop me a line and I’ll correct this oversight. One more stylistic note before we begin – our two workshops (middle and high school) were essentially the same, with the math explored in the high school session being slightly deeper than in the middle school session. So, below, I’ll just say things like “in our workshop” or “our workshop began” for simplicity.

For easy reference, here’s a PDF of slides from our workshop: NSTA_STEM_2016_Pelesko_HighSchool

We began the workshop by sharing a short story that appears at the start of the physicist Eugene Wigner’s famous talk “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” I won’t reproduce the story here, but if you follow the link, we shared the first paragraph of Wigner’s talk. We then spoke briefly about this idea, the idea that there is this incredible power, a power that verges on the mysterious or miraculous, afforded to us once we understand how to use mathematics to understand the natural world. We talked about this as the basis for answering the question “How do we put the M in STEM?” and we asked participants to share their experiences and their challenges with incorporating mathematics into STEM activities.

Participants shared some very real and very pressing concerns that we need to overcome if we’re to be successful in putting the M in STEM and if we’re to be successful in implementing STEM overall. These included overcoming the discomfort that many math teachers feel with science and many science teachers feel with math. One participant noted that to overcome this, we need to find time and space for science and math teachers to collaborate and to work together. Absolutely! Another participant noted that if you looked at the NGSS and the CCSSM, they are almost forcing this to happen. These are excellent points and I want to emphasize them here – if we are going to effectively teach students the art of mathematical modeling, it is crucial that we learn to work across math and science. Yes, one can do mathematical modeling using contexts that require little understanding of science, but the full power of mathematical modeling is only really unleashed when we’re involved in a deep scientific investigation of phenomena in the natural world.

Next, we spent some time talking about the overlap in the practice standards between NGSS and CCSSM. In particular, we talked about SMP #4 – Model with mathematics, from CCSSM and Practice Standard #2 – Developing and using models, from NGSS. We spent some time exploring how the NGSS standard encompasses the CCSSM standard, with mathematical models being one type of model talked about in NGSS. The key idea we explored was this relationship, the idea that mathematical models are really scientific models encoded in the language of mathematics.

Then, we spent some time looking at a STEM activity and where mathematical modeling fit into the picture. The activity we discussed was the Great Lakes problem that I’ve talked about here. Participants spent some time working to construct a mathematical model of the Great Lakes system and at the end, shared their results and their thinking.

I want to mention two things shared by participants that I thought were really cool. One participant from Montana told me about the Berkeley Pit Mine in Montana:

[Photo: the Berkeley Pit mine]

Apparently, this is an abandoned copper mine, huge in scale, that is slowly filling with water. The problem is that the water is incredibly contaminated and that eventually the water level will rise to the level of the local water table. When that happens, backflow will occur, contaminating local and regional water supplies. This is clearly a source of many wonderful STEM and mathematical problems. I’ll think about it some more and I’m sure I’ll post more about it in the future. Thanks for sharing!

Another problem shared by a participant was one she does in her class. In this project, she has students read the book “The Immortal Life of Henrietta Lacks” and then explore changing concentrations of drugs in the body. As her experimental system, she has students start with a beaker filled with water and a certain amount of dissolved salt. They measure the mass of the system, then remove a small quantity of the “drug filled water” and replace it with an equal amount of clean water. Then, they measure the mass again and repeat. Tracking the mass measured each time, they uncover the curve that describes the changing concentration of the “drug” in the system. Mathematically, this is identical to the Great Lakes problem we explored in this session and a great example of the generalizability of mathematics. That is, we often discover that mathematical models we’ve built of one system are able to describe what we see happening in lots of systems. This happens when the underlying processes are the same, as they are here. Again, thanks for sharing!

Thanks again to everyone who participated this week! Please feel free to email or tweet anytime! Looking forward to more great conversations about the art of mathematical modeling.

John


This week, I’m in Denver, Colorado for the 5th annual STEM Forum and Exposition organized by NSTA. Later this week, on behalf of NCTM, I’ll be running two short “NCTM sessions” on mathematical modeling. In these sessions, we’ll explore the question “What should the ‘M’ in STEM look like in a good STEM activity?” Later this week, I’ll post more about these sessions and the STEM forum, but today, I want to talk about my plane ride.

No matter how often I fly, I find that looking out the window of an airplane never gets old. Given a choice, I always choose a window seat, even if it means seat 35F at the very back of the plane as it did yesterday. My choice was rewarded with a cloudless sky for most of the flight and I spent my time alternating between reading and looking out of the window. Now, if you fly from Philadelphia to Denver, or fly any other route that takes you across the Midwest, you’ll see large swaths of the country that look like this:

[Aerial photo: center pivot irrigation circles]

Crop circles! Well, okay, that’s probably not the kind of crop circles you thought of when you read the title to this post. But, they’re still really cool and it’s fascinating to see the patterns laid down over thousands and thousands of acres of the United States. These crop circles are, of course, the result of what’s called “center pivot irrigation.” This is where a pumping system is built at the center of a circle, a long mobile arm of sprinklers is constructed, and this arm pivots around the central pump, irrigating a large circle of crops. If you’ve ever driven through regions where center pivot irrigation is used, you’ve likely seen the sprinkler arms:

[Photo: a center pivot sprinkler arm]

Flying over thousands and thousands of these crop circles yesterday, I realized that they present all sorts of interesting mathematical modeling questions, and more general STEM questions. Here are just a few that occurred to me:

The pivot arm is on wheels all along its length. At various points along this radius, motors of some sort drive the motion. Since the arm sweeps out a circle, the wheels furthest out must move faster than those closest to the pivot point. How is this motion coordinated? How does this need to increase speed with radial distance limit the largest circle that’s practical?

Since water is being forced from a central point, the water pressure must drop as we move outward along the radial arm. This means that if all sprinklers along the arm were identical, the crops closest to the pump would get over-watered, and those furthest out under-watered. How does one design a sprinkler system for this arm so that we deliver the same amount of water across the entire circle? (I sketch one piece of this constraint in code just after this list of questions.)

While the arrangement of these crop circles clearly must follow, to a certain degree, local topography, why are they generally arranged in a less-than-optimal packing arrangement? That is, we know that hexagonal packing of circles covers more area (about 90.7% of the plane) than the rectangular packing we observe (about 78.5%). So, why do these circles generally follow the arrangement on the left rather than the right in the picture below?

[Diagram: rectangular versus hexagonal packing of circles]

Why are the circles that we see generally all the same size? For the most part, there are only two sizes of circle one will see. Why does there appear to be a minimum circle size? We know that if we used circles of various sizes, we could cover more area, as in this picture:

[Diagram: packing circles of many different sizes]

Why don’t farmers use circles across a greater range of sizes?
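Before moving on, here’s the promised sketch for the sprinkler-design question. It’s a back-of-the-envelope calculation (all numbers invented): in one revolution, a nozzle at radius r waters an annulus of area roughly 2πr Δr, so to lay down the same depth of water everywhere, the flow through each nozzle must grow linearly with its distance from the pivot.

[code language="python"]
#A sketch of the uniform-watering constraint for a pivot arm
#In one revolution a nozzle at radius r covers an annulus of area
#2*pi*r*dr, so its flow must be proportional to r. Numbers invented.
import math

R = 400.0        #Arm length, meters
n = 20           #Number of evenly spaced nozzles
depth = 0.01     #Target water depth per revolution, meters
period = 86400.0 #Seconds per revolution (one day)

dr = R/n
for i in range(n):
 r = (i + 0.5)*dr                 #Nozzle at the center of its annulus
 q = 2*math.pi*r*dr*depth/period  #Required flow, cubic meters per second
 print("r =", round(r, 1), "m, flow =", round(q*1000, 3), "L/s")
[/code]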

Now, as with any good mathematical modeling problem or any good STEM problem, I imagine that the answers to these questions are complex, and involve multiple factors and multiple constraints. As my flight neared Denver yesterday though, I decided to see if I could at least convince myself that there was a good reason why we don’t see small crop circles by sketching out a really simple mathematical model. This was a “back of the airplane menu” type model, but nonetheless, fun to play with and I thought I’d share it with you here today.

I wanted to see if there was an economic reason why we don’t see circles below a minimum size. That is, are circles below a certain size just not profitable? I assumed that there were three basic costs associated with constructing and running a single center pivot irrigation system:

    \begin{equation*} C_p = \text{Fixed cost of purchasing a pump} \end{equation*}

    \begin{equation*} C_m = \text{Cost of water used, proportional to square of the radius of the circle} \end{equation*}

    \begin{equation*} C_r = \text{Cost of pivot arm, proportional to radius of the circle} \end{equation*}

I also assumed that the revenue one would generate was proportional to the area of the circle:

    \begin{equation*} R = \text{Revenue, proportional to radius squared} \end{equation*}

Putting this all together meant that the profit, P, could be written as:

    \begin{equation*} P = R-C_p-C_m-C_r \end{equation*}

Using my assumptions of proportionality to the radius, r, I could rewrite this as:

    \begin{equation*} P = a_0 r^2 - C_p - a_1 r^2 - a_2 r \end{equation*}

Here, the a_i are positive constants of proportionality. A little rearrangement yields:

    \begin{equation*} P = (a_0-a_1) r^2 - a_2 r - C_p \end{equation*}

Now, a_0-a_1 must be positive; otherwise, the profit would always be negative and there would be no sense in ever having any sort of circle. Knowing that means the general shape of the profit curve as a function of r must look like:

[Sketch: the profit curve, an upward-opening parabola in r that starts negative and crosses zero at a positive radius]
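Since P(r) = (a_0-a_1)r^2 - a_2 r - C_p starts below zero (at r = 0, the profit is -C_p) and opens upward, it first turns positive at the larger root of P(r) = 0, which the quadratic formula gives directly. Here’s a tiny sketch with invented coefficients:

[code language="python"]
#Break-even radius for the crop circle profit model
#P(r) = (a0 - a1) r^2 - a2 r - Cp; all coefficient values invented
import math

a0, a1 = 12.0, 4.0 #Revenue and water-cost coefficients
a2 = 30.0          #Pivot-arm cost coefficient
Cp = 500.0         #Fixed pump cost

a = a0 - a1 #Must be positive for any circle to be profitable
r_min = (a2 + math.sqrt(a2**2 + 4*a*Cp))/(2*a)
print("Profitable only for r >", r_min) #10.0 with these numbers
[/code]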

The fact that this curve becomes positive only at some finite positive value of r means that, yes, there is a minimum size below which a crop circle just isn’t profitable. I’m convinced that our farm industry knows what it’s doing when it avoids making teeny-tiny circles. In fact, it seems that we should be driven to make our circles as large as possible (note that real circles cover hundreds of acres) and that there must be another explanation for why we only make them up to a certain size. I suspect that the maximum size of these circles is dictated either by the demands of designing for the pressure drop across the sprinkler array or the demands of increasing speed of the motors as we move further out along the arm. Or, probably both.

But, it was time to ask my neighbors to get up one last time so I could use the bathroom and then, quickly, my plane ride to Denver was over. I hope you’ve enjoyed my random flight musings and hope perhaps you’ve found some inspiration for some cool STEM problems here in this post. I’ll be sure to post again at least once more this week and share what I learn at this year’s STEM Forum. Till later.

John


This post was inspired by a recent twitter conversation between @woutgeo, @ddmeyer, @cheesemonkeysf (cool handle!), and myself. The 140-character-at-a-time conversation revolved around a comment from @woutgeo:

“…am still mulling whether modeling Q’s can have correct, known answers”

Since this seems to be a common point of confusion, or even contention, I thought I’d talk about this idea a little today. That is, what do people mean when they make statements like “Modeling problems don’t have a single, unique, correct answer.”? If you read the introductory chapter to the new Annual Perspectives in Mathematics Education (APME) volume, Mathematical Modeling and Modeling Mathematics, you’ll find versions of this statement in several places. I’ll highlight two:

There are multiple paths open to the mathematical modeler, and no one, clear, unique approach or answer.

Mathematical modeling authentically connects to the real world, starting with ill-defined, often messy real-world problems, with no unique correct answer.

What do people, in particular, what do those of us who do mathematical modeling professionally, mean by such statements? What are we really trying to say with statements like “no unique correct answer”? There are actually multiple levels to this point, and I think it is worthwhile exploring a few of these levels here today.

At the simplest level, such statements express the point that the answer to a modeling problem is not like the answer to a typical textbook or classroom math problem. When we think of the idea of an “answer” to a math problem, due to many years of repetitive training, what we most often visualize is a number. “The” answer is 586, or 7, or 2+3i, or \sqrt{17}, or something like that. Perhaps, if we’re a bit more deeply immersed in algebra, or trigonometry, or calculus, our default vision of “answer” might be more like x^3 or \sin(3 \theta) or \frac{1}{x} or some such expression. Note that this default vision of an “answer” is some form of mathematical object and tied to that, perhaps so intimately that we don’t see it, is the idea that this answer is easily checked. It’s the result of “doing the math correctly,” and hence, of course, we should only get one such answer. But, when we talk about the answer to a modeling problem, these are not the types of objects we’re talking about. In one very important sense, the answer to a modeling problem is a model. And, here is the first place where this idea of “no unique answer” comes into play. Because models of a given real-world situation can be constructed using wildly different mathematical tools and are based on assumptions made by the modeler, it is often the case, likely even, that two modelers approaching the same problem produce different models, i.e., different “answers” to the same modeling problem. This is the point that the first of the two statements from the APME volume above is really making.

But, there is another level to this idea of “no unique answer” that’s worth exploring. The second statement from the APME volume mentioned above points to this next level. Let’s examine the statement again:

Mathematical modeling authentically connects to the real world, starting with ill-defined, often messy real-world problems, with no unique correct answer.

Here, note that “answer” is not referring to the answer to some modeling problem in the sense discussed above, but is referring to the “answer” to a real-world problem that the modeler is trying to address. Here, the notion of “no unique answer” points to the messiness and inherent uncertainty of the real-world. Because we can never hope to capture all of that messiness or tame all of that uncertainty, our models always remain provisional, approximate, and open to improvement. That is, we obtain “an answer” to our problem, and we evaluate whether or not it is good enough for our purposes, but we never get “the answer.”

Another way to see this is to always keep in mind that mathematical modeling is, ultimately, a process. Hopefully, it’s a process that draws us closer and closer to the truth, but, like an asymptote, never quite gets there. You can see this point of view and get a sense of the notion that there isn’t one right “answer” or model, but rather a never-ending array of possible “answers” or models, in the CCSS for mathematics:

In situations like these, the models devised depend on a number of factors: How precise an answer do we want or need? What aspects of the situation do we most need to understand, control, or optimize? What resources of time and tools do we have? The range of models that we can create and analyze is also constrained by the limitations of our mathematical, statistical, and technical skills, and our ability to recognize significant variables and relationships among them. 

There is a wonderful one-paragraph story by Jorge Luis Borges that is related to this second point. It’s called “On Exactitude in Science” and in this story, Borges explores the idea of modeling and uses absurdity to remind us that useful models are always incomplete. His story is short enough to reproduce here:

In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

The point, of course, that Borges is making is that a big part of science, a big part of understanding the real-world, is about making models and that the only perfectly complete model of reality is reality itself, but that such a model is also completely useless! This is true whether we’re talking about physical models like maps or more abstract mathematical models. The magic is that maps and models, approximate, imperfect, and ignoring vast parts of reality, are incredibly useful and are our best tools for understanding, predicting, and controlling that reality.

John


Just spent a great week at NCSM 2016 in Oakland, CA. Wish we could have stuck around for NCTM 2016 across the bay, but sadly, not in the cards this year. Today, I thought I’d share a bit about our session on mathematical modeling for those who couldn’t be there in person.
First, let me share links to a few things. A PDF of our slides from the session can be found here.
We also left the attendees with two handouts that can be found here.
The first handout outlines the key features of mathematical modeling and the key features of the QFT, which I’ll talk about below. The second is a useful worksheet for helping teachers and students engage in meta-thinking about their mathematical modeling work.
During the opening of our session, Michelle strove to make the following point – it’s not surprising that teachers in the United States are finding the implementation of SMP #4 to be a challenge. She shared the following table which highlights the very small fraction of teacher preparation programs that provide their students with any training in mathematical modeling.
[Table: the small fraction of secondary teacher preparation programs that include training in mathematical modeling] (Newton, Maeda, Alexander, & Senk, 2014, Notices of the AMS)
For teachers graduating from one of the 85% of programs that provide no training in mathematical modeling, preparing to teach SMP #4 is like preparing to teach geometry, never having had a course in geometry. In fact, it’s a bit more difficult, since even if someone somehow missed out on a college level course in geometry, they still would have seen it in high school. With modeling, we face the situation where the vast majority of teachers have never seen mathematical modeling at any level in their education. In addition to this issue, Michelle also discussed the huge deficits in secondary mathematics curricula related to mathematical modeling and made the point that a long history of embedding math tasks in pseudo-contexts has left students unprepared to deal with real real-world situations in the math classroom.
In the next part of our session, I spent some time discussing the points we discussed in one of our previous posts. Since at least one of the attendees found a new analogy we used particularly compelling, let me elaborate on that point here (thanks @mary_davis_utdc!). When talking about the important fact that mathematical modeling is an iterative process, we discussed the question as to what exactly drives that iteration. For an analogy, I relied on one of my favorite things, the lazy river. A special thanks to @mary_davis_utdc for tweeting us this picture after the session!
[Photo: a lazy river, via @mary_davis_utdc]
In a lazy river, you’re just drifting around and around a river in a closed loop, that is, in a cycle. But in every such river, at some point along the cycle, you’ll find there are jets that propel the water in a particular direction and keep the cycle moving. In mathematical modeling, it’s the “Validate” step that serves as the “jets” of the modeling cycle. Remember, the “Validate” step is where you take the model that you’ve formulated and analyzed and compare its predictions or insight back to the real world. To the extent that your model’s predictions or explanation differs from what you see in the real world, you’re propelled (jetted?) back around the modeling cycle, back to the formulate stage, back around the lazy river of mathematical modeling. If this isn’t occurring, if your modeling activities aren’t being driven naturally around and around by this validation step, you’re not really doing mathematical modeling.
In the final part of our session, Michelle introduced a pedagogical tool that we’ve found particularly useful when engaging teachers and students in mathematical modeling. This is a tool called the “Question Formulation Technique” or “QFT” for short. It was developed by Rothstein and Santana over the course of many years and has been used in an incredibly wide variety of settings. I encourage you to visit www.rightquestion.org to learn more, or better yet, read their really excellent book on the topic:
[Image: cover of Rothstein and Santana’s book]
Over the past several years, we’ve worked to find effective ways to incorporate the QFT into the teaching of mathematical modeling. The genesis of this idea was my stubborn insistence on defining mathematical modeling as “the art of asking good questions” and Michelle’s equally stubborn insistence on saying “What the hell does that mean?” As we thought about this carefully, we gradually realized that students often struggle with mathematical modeling in the same way that they struggle with mathematical proof – they’re stuck at the beginning, stuck at “Where do I start?” With modeling, clearly defining the questions you’re trying to answer, learning to identify the types of questions that modeling can answer, and identifying the questions you need to answer in order to build a model are all crucial steps. These occur right smack at the start of the process, somewhere within that “Problem” box and along the way to that “Formulate” step. What we’ve found is that using the QFT at the start of the task, or strategically at points along the modeling cycle, is a really good way to get students to think deeply about these questions, own these questions, and be motivated to answer these questions.
If you’d like to read a little bit more about our work with the QFT and mathematical modeling, here’s a draft article we’ve been working on.
My guess is that this isn’t going to make it much past the draft stage, as we’re now shifting to work on our new book on mathematical modeling. Ah, that’s a perfect segue to what was the biggest highlight of the week for us – signing a contract with Math Solutions to publish this book. It’s tentatively titled “Model with Mathematics” and we’ll keep you posted on progress here. We’re both very excited to be working with the Math Solutions team on this project and looking forward to sharing more of what we’ve learned about learning and teaching mathematical modeling with the community. So, stay tuned!
Finally, just wanted to say a special thanks to everyone who attended our session. What a great crowd! As always, please feel free to contact us with any follow-up questions or comments. We’d love to hear from you.
John

If there were a top-ten list of “things that make math teachers cringe,” the question “When will we ever use this?” would surely be at the top. That’s pretty independent of whether you teach at the elementary grades, middle school, high school, or college. Quite rationally I suppose, most students want to know there is some utility in what they’re learning, that this lesson is not just another “eat your spinach, it’s good for you” type of lesson, but is something they’ll be able to see as relevant to their own lives and their own careers.

One of the nice things about teaching mathematical modeling is that it’s incredibly relevant in a wide variety of contexts and to people working in a tremendous variety of fields. As I read the news each day, I keep an eye out for neat places where mathematical modeling shows up and today, I thought I’d share a few recent ones with you.

One of the coolest is the recent discovery of evidence for the existence of a ninth planet (poor Pluto!) in our solar system. This discovery, announced by Caltech researchers in January of this year, relies entirely on indirect evidence provided by a mathematical model. In this case, no one has actually seen the ninth planet; all of the evidence comes from observations of objects in what is known as the Kuiper Belt. These objects are moving in ways that just don’t make sense…unless there is some other very large mass out there as well. By constructing a mathematical model of how these objects should move and inserting an unknown large mass into the model, the Caltech team has shown that the most likely explanation for the motions that are observed is the presence of something that isn’t observed, i.e. a ninth planet. How cool is that? Note that the reasoning of the Caltech team is exactly the same as the reasoning we’ve been discussing here. They observed a pattern, they sought to explain that pattern, they made a hypothesis about what could be causing that pattern, they built a mathematical model incorporating that hypothesis and showed that the model predicted the observed pattern, and hence can claim that the probability that their hypothesis is true is now very high. In this case their hypothesis happens to be the very exciting one that a previously undiscovered planet exists!

In an entirely different direction, a team from the University of Aberdeen recently built a mathematical model that explains how things go viral. In this case, the team wanted to understand how things like the Macarena could suddenly become wildly popular, or how “Numa Numa” could garner more than two million views on YouTube in just three months, or more importantly how social movements, ideas, or products could catch on or fail to do so. Here, the team borrowed from mathematical models used in epidemiology, similar to those we explored in “Pictures and Stories,” and added in the effects of acquaintances, such as those we maintain through social media, to construct a new model that could examine the spread of ideas. The Aberdeen team showed that while an individual’s resistance to the spread of a “contagion” might be high, when bombarded by that contagion from many directions, such as happens through Facebook or Twitter, transmission occurs, i.e. you go view Numa Numa as well. That synergy leads to explosive transmission and we say that something has gone “viral.” This is not only a wonderful example of the use of mathematical modeling to explain a real-world phenomenon, but also a wonderful example of the generalizability of mathematics and mathematical models. The same mathematics and the same types of mathematical models that can be used to study the spread of Ebola here have been used to study the spread of ideas.

Another example that caught my eye recently was work that appeared in PLOS One, where researchers investigated the impact of deploying a test for a type of drug-resistant tuberculosis. The question here was whether or not having a test that detected the particular drug-resistant strain, in addition to existing tests for TB and another type of drug-resistant strain, would impact the spread of TB throughout a population. Knowing the answer to this question allows researchers to effectively direct their time and resources. If this third test would help contain the spread of TB, it would be worthwhile, but if it didn’t, that time and money could be more usefully directed toward something that would actually save lives. The answer here, arrived at through extensive mathematical modeling, was surprising. The additional test did nothing to impact the spread and hence is not worth developing or deploying. As you might imagine, this is the type of question that not only can be answered by a mathematical model, but can only be answered by a mathematical model.

Discovering new planets, explaining the spread of viral videos, and determining where to invest time and money in medicine are just three very recent examples of mathematical models and mathematical modeling impacting lives and people around the world. I encourage you to keep an eye on the news; I’m certain you’ll quickly collect your own stable of such stories to share with your students when they ask you “When will we ever use this?”

John


If you spend any time reading the literature on mathematical modeling, you’ll quickly encounter some version of the phrase:

We build mathematical models of the real world in order to explain, predict, or control. 

Usually, you’ll find such language as part of a definition of mathematical modeling or of a fairly high-level description of the process. But this language often isn’t revisited when examples of mathematical modeling are described; it’s left by the wayside, and it’s up to the reader to puzzle out what a given mathematical model was intended to do. Once you have some experience, this isn’t hard, but for the novice, it can be confusing, and since the whole point of mathematical modeling is to accomplish one or more of these three goals, it is important for the new modeler to develop some facility with these concepts. So, today, I thought we’d explore the ideas of “explain, predict, or control,” and give concrete examples of each case. Hopefully this will help clarify these purposes of mathematical modeling in your mind and give you a framework for thinking about the purposes of your modeling activities in new cases.

Let’s start with the notion of constructing a mathematical model to explain. This is the case that is most clearly and directly bound together with the scientific process. As our example, let’s return to our investigation of “Fairy Circles” that we discussed here and here. Recall that what we were trying to understand was the origin of so-called “Fairy Circles,” or large, circular, regular clearings in the desert:

[Photos: fairy circles]

The observation was that these were roughly circular and roughly uniform in size. There was no apparent reason for their appearance, hence the invoking of “fairies” as an explanation. Now, this is pretty clearly a situation that calls out for a model that explains. We could ultimately think about a model that predicts their appearance and shape, but it is hard to get excited about a model along those lines that doesn’t also explain. Now, it is important to make sure we understand how mathematical modeling works when we’re seeking to explain. This, again, is where we connect deeply to science. We can imagine that there are lots of different possible mechanisms that would lead to the presence of these circles. That is, we can hypothesize many explanations for the presence of these circles. But, how do we test these hypotheses? An experimental test, in this case, is very difficult to conceive of and likely very expensive and time consuming. That’s a perfect situation in which to bring out the tool of mathematical modeling. Instead of an experiment, what we do is take our hypothesis and build a mathematical model of the purported mechanism for the creation of the circles. We then analyze our model to see if in fact the mathematization of that hypothesis leads to a model that predicts what we observe. If it does, that increases the odds that our hypothesis is correct. Note, it’s not a proof of our hypothesis! It only increases the probability that the hypothesis is correct and gives us more confidence that we have found the likely explanation.

Next, let’s think about what a mathematical model looks like when we want to predict. For this one, let’s return to our discussion of the “tipping bucket/water park” system. Recall that here, we had a bucket held slightly off-center on an axle with water flowing into the bucket.

[Image: tipping bucket at a water park]

Periodically, the bucket would become unstable, tip, spill the contained water, and then return to the upright position and repeat. In this case, there is less mystery and less of a call for an explanation. We can see that the system is mechanically driven, we have intuition about the changing stability, and we can see how it empties and resets. Here, what we want is to be able to predict the period of oscillation given the physical properties of the system. That is, if we know the rate at which water flows into the bucket, the mass of the bucket, the volume and shape of the bucket, and the location of the axle, we’d like to be able to predict the interval of time between tips of the bucket. We’d like to be able to make this prediction for any given set of parameters and perhaps use it to design buckets that tip at different intervals. To accomplish this, we still make a hypothesis, namely that Newton’s laws of mechanics govern the behavior of the system. But it’s not really a hypothesis that we’re testing; we have tremendous confidence in it a priori. Rather, we’re accepting this hypothesis, mathematizing, and using the results of our analysis to make predictions about systems we haven’t seen yet. Yes, we must still compare the results of our model to observations of the real system to validate it. But here, the validation is more along the lines of making sure we did the mechanics and the mathematics correctly, and less along the lines of testing the hypothesis that Newton’s laws apply.
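To make the flavor of such a prediction concrete, here’s a minimal sketch in Python under a deliberately crude assumption: the bucket tips the moment its water volume reaches some critical volume set by the bucket’s shape, mass, and axle location. Here that critical volume is simply taken as a given parameter rather than derived from the mechanics, and all numbers are illustrative.

```python
# Crude tipping-bucket prediction: if the bucket tips whenever its water
# volume reaches V_CRIT, the period between tips is just V_CRIT / inflow.
# V_CRIT would, in a fuller model, come from the bucket geometry and axle
# position; here it is an assumed parameter.

V_CRIT = 2.0   # liters of water at which the bucket becomes unstable (assumed)

def tipping_period(flow_rate):
    """Predicted seconds between tips for a given inflow (liters/second)."""
    return V_CRIT / flow_rate

# Design use: sweep the inflow rate to see how the period responds.
for q in (0.05, 0.1, 0.2, 0.4):
    print(f"inflow {q:.2f} L/s -> period {tipping_period(q):.1f} s")
```

Even this toy version shows the design payoff: halving the inflow rate doubles the predicted period, so you can tune the interval without rebuilding the bucket.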

Finally, let’s think about the notion of using a mathematical model to control some real-world system. Note that for both of the systems considered above, we can think about using the models we develop for control in some sense. In the case of Fairy Circles, once we’ve fully explained them, we can look for parameters in our model that we could change in the real world and that would produce different-sized circles, or perhaps no circles at all. In the case of the tipping bucket, we can imagine changing the size of the bucket or the rate of the water flow, and since we can predict how the period of oscillation will change, this gives us control over the system. These are certainly both examples of how we can use a mathematical model to control. But I want to give you one other example that perhaps more clearly highlights this idea. Take a moment and watch this video clip:

This video shows the outcome of a project focused on implementing a classic feedback-control system for an “inverted pendulum.” The basic idea is simple, and you can try it right now: take a pencil and balance it by its point on your hand. You’ll almost automatically move your hand to keep the pencil upright. The feedback you receive, visual and tactile, about the pencil’s position lets you rapidly adjust the motion of your hand to keep the pencil upright. Now, if you want to build a machine to do this, here’s where you need a mathematical model for control. As with the tipping bucket, you would build a mathematical model based on the laws of mechanics. This model would tell you how your pendulum should move, but into it you would build an unspecified applied force, in this case the motion of the base of the pendulum. By analyzing your model, you would then determine precisely how the base should move to maintain the upright position of the pendulum, and this is what you’d tell your machine to implement. That is, your mathematical model will tell you that if the pendulum is at such and such a position and moving at such and such a rate, you should move your cart in such a manner. Those are instructions a machine can follow, and you’ve now used a mathematical model to control a system in real time. In addition to the ways in which you might use a model to control, illustrated by the Fairy Circles and the tipping bucket, it’s useful to have this real-time, programmable sense in mind as well.
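To give a feel for what those machine-followable instructions look like, here’s a minimal sketch, not the controller from the video, of stabilizing a linearized inverted pendulum by accelerating its base using simple proportional-derivative (PD) feedback on the tilt angle. The dynamics are the textbook linearization, and the length, gains, and initial conditions are all illustrative assumptions.

```python
# PD feedback control of a linearized inverted pendulum on a moving base.
# The "control law" below is exactly the kind of rule the model hands to
# the machine: given tilt and tilt rate, accelerate the base this much.

g = 9.81             # gravitational acceleration (m/s^2)
L = 0.3              # pendulum length (m) -- illustrative
KP, KD = 25.0, 4.0   # feedback gains -- illustrative; KP must exceed g here

theta, omega = 0.15, 0.0   # initial tilt (rad) and tilt rate (rad/s)
dt = 0.001                 # simulation time step (s)

for _ in range(5000):      # simulate 5 seconds
    # Control law: base acceleration from the measured tilt and tilt rate.
    a = KP * theta + KD * omega
    # Linearized dynamics of the tilt: theta'' = (g*theta - a) / L
    alpha = (g * theta - a) / L
    omega += alpha * dt
    theta += omega * dt

print(f"tilt after 5 seconds: {theta:.5f} rad")  # near zero if stabilized
```

With the base acceleration substituted in, the closed-loop equation becomes theta'' = ((g - KP)/L) theta - (KD/L) theta', which is a damped, stable oscillator whenever KP exceeds g and KD is positive; that analysis is precisely the “analyzing your model” step described above.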

Hopefully these examples give you a clearer sense of the differences and similarities between building a mathematical model to explain, predict, or control. I encourage you to think through the goals of your models as you build them and as you first encounter a new modeling situation. In upcoming posts, I’ll work to demonstrate each of these three purposes in even more detail with further examples.

John


It’s winter in Delaware and while so far, it’s been a mild winter, today, we’re in the midst of a pretty good blizzard. Nonetheless, this past week I started planning my garden for the spring and thought some thoughts of spring might be nice to share today, especially for those of us on the east coast.

Last year was our first summer in our new house, so we just had a small vegetable garden in a flower bed along the back of the house. Now that I have a better sense of light and shadow, I’m ready to start planning out a small, but larger, bed for this year. And this past week, I got to thinking – why not make it a smart garden? That is, why not make it a garden that feeds me data about temperature, water content in the soil, hours of direct sunlight, and deer? Perhaps, I thought, I could make it self-watering and responsive to all that data. So, I enlisted the aid of my 14-year-old son and we did some serious brainstorming about what we’d like in a smart garden.

We dreamed up a garden that not only would return us real-time data to our phones and would automatically water itself, but that also would detect and scare off predatory deer and other such annoying creatures. We talked about the garden regulating its own temperature and automatically picking ripe tomatoes, washing them, and putting them on the kitchen counter… then, we decided to start, well, more simply.

This led us to think about sensors, and measurement, and a little bit about the mathematical models that underlie sensing. I thought I’d share some of this thinking with you today.

We decided that the most important thing we needed to measure was the moisture level in the soil. Aside from weeding, which I don’t know how to automate, watering is the next most time-consuming task, and one I think we can automate. While I generally don’t mind watering the garden, I do get distracted and forget sometimes, and that’s just plain not good for tomatoes. So, the question became – how do we know how much moisture there is in the soil?

Let’s look first at the basic proof of concept demonstration that we rigged up, and then we’ll talk a little bit about indirect evidence and mathematical modeling. We used some wire, a sponge, a resistor, an LED, and a power supply, and built the following basic moisture sensor:

[Image: prototype sponge moisture sensor]

The two red wires sticking out of the sponge have about 2 inches of insulation stripped off their ends. The wires aren’t connected to each other; rather, the two bare ends are simply stuck into the sponge. If the two wires were connected, the circuit would be complete and the LED would light up. With the two bare ends held a distance apart in the sponge, the circuit is open when the sponge is fully dry, and it closes with varying levels of resistance depending on how wet the sponge is. This means the LED lights up with a brightness that depends on how much water there is in the sponge. Voila! Moisture sensor.
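To see why the brightness tracks moisture, here’s a minimal sketch, assuming the sponge acts as a variable resistor in series with the LED and a fixed current-limiting resistor, and idealizing the LED as a constant voltage drop. All component values are illustrative, not measured from our rig.

```python
# Idealized series circuit: supply -> fixed resistor -> LED -> sponge.
# The LED's brightness rises with the current through it, and the current
# falls as the sponge's resistance rises (i.e., as the sponge dries out).

V_SUPPLY = 5.0    # supply voltage (V) -- assumed
V_LED = 2.0       # approximate forward voltage drop of the LED (V) -- assumed
R_FIXED = 220.0   # current-limiting resistor (ohms) -- assumed

def led_current(r_sponge_ohms):
    """Series-circuit current in amps: brighter LED <=> more current."""
    return max(0.0, (V_SUPPLY - V_LED) / (R_FIXED + r_sponge_ohms))

# Drier sponge -> higher resistance -> dimmer LED.
for r in (100.0, 1_000.0, 10_000.0, 1e9):   # 1e9 ohms ~ open circuit (dry)
    print(f"sponge resistance {r:>12.0f} ohm -> current {led_current(r)*1000:.2f} mA")
```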

Now, while this is pretty crude, it illustrates the basic idea behind lots and lots of different types of sensors. The idea is built on inference from indirect evidence. We can’t directly see how much water is in the sponge, so we set up something we can see that changes with how much water is in the sponge, and look at that instead. If you get this, you get the idea behind lots of sensors – if you can’t look at X, look at something you can see that changes with X, and infer how X must be changing.

Now, notice, this means that we need to know the functional relationship between the resistivity of the sponge and its moisture level. We’re measuring resistivity, in this case via the brightness of the LED, and we want to infer what that means about moisture levels. That is, whether we can make our sensor useful depends on whether we have a mathematical model of how the resistivity of this system changes with moisture. We can think about this model very simply as the functional relationship:

(1)   R = F(M)

Here, R is the resistivity and M is the moisture content of the soil. In an ideal world, this would be a simple proportional relationship; we’d look up or measure the constant of proportionality, and we’d have a simple and useful way to determine moisture. But, of course, as in all things we want to model in the real world, things aren’t quite as simple as the ideal case! Perhaps the biggest complicating factor is that soil resistivity varies with other things that are likely to be changing in our system. In particular, it varies with temperature and with the ionic content of the water. As in any modeling problem, the question then becomes whether these other factors alter the functional relationship we’re after in a meaningful way. Here, it may be safe to assume that ionic content doesn’t vary much over the life of the garden, so we can ignore it. Temperature, on the other hand, does vary a lot, and it turns out that on the scale on which we’re trying to measure this functional relationship, the variation of temperature has a significant effect on F. So, we really need to be thinking about a mathematical model of the form:

(2)   R = F(M, T)

Here, T is temperature. Now, there are two paths one could take to obtain F and these nicely correlate with the notion of descriptive and analytic models that we’ve explored before in this space. In this context, a descriptive model of the situation is often called an empirical model. You can imagine that we could obtain F by designing a set of experiments. For example, we could take a sample of soil and thoroughly dry it out in the oven, let it come to some known temperature, and then measure R. We then add known quantities of water to this known volume of soil and measure R along the way. We then repeat this for a bunch of different temperatures, T. In this way, we build up a data set to which we can fit a curve that becomes our model, F. In situations like this, and for moisture sensors, this is the general approach. An alternative, analytic path, would involve computing the resistance from first principles. That would mean we’d need a good model for the soil composition as a function of space, the resistivity of each of the constituents of the soil, the flow of water through the soil, and how all of these things respond to temperature. In this case, the complexity of such a model and the work involved in the analysis isn’t likely to be worth the payoff. It might be interesting and I’d bet that someone, somewhere has done it, but if all we’re interested in is calibrating and using our sensor, I’d go with the empirical model here.
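To make the empirical path concrete, here’s a minimal sketch of the calibration workflow: fit a surface R = F(M, T) to (moisture, temperature, resistance) measurements, then invert the fitted curve to estimate moisture from a resistance reading at a known temperature. The exponential functional form and the data below are illustrative assumptions, not real soil measurements.

```python
# Empirical calibration of a moisture sensor: fit R = F(M, T) to lab-style
# data, then invert it so field readings of R (at known T) give estimates of M.

import numpy as np
from scipy.optimize import curve_fit, brentq

# Assumed form: resistance falls off with moisture, scaled by temperature.
def F(MT, a, b, c):
    M, T = MT
    return a * np.exp(-b * M) * (1.0 + c * (T - 20.0))

# Hypothetical calibration data: moisture fraction, temperature (C),
# measured resistance (kilo-ohms).
M_data = np.array([0.05, 0.10, 0.20, 0.30, 0.05, 0.10, 0.20, 0.30])
T_data = np.array([10.0, 10.0, 10.0, 10.0, 30.0, 30.0, 30.0, 30.0])
R_data = np.array([95.0, 60.0, 25.0, 10.0, 80.0, 50.0, 21.0, 8.5])

# Least-squares fit of the parameters (a, b, c).
params, _ = curve_fit(F, (M_data, T_data), R_data, p0=(100.0, 8.0, -0.01))

def estimate_moisture(R_measured, T_known):
    """Invert the fitted calibration curve: solve F(M, T) = R for M."""
    return brentq(lambda M: F((M, T_known), *params) - R_measured, 0.0, 1.0)

print(f"estimated moisture: {estimate_moisture(30.0, 25.0):.3f}")
```

The fitting step is exactly the oven-drying experiment described above, done in software, and the inversion step is what the sensor performs every time it turns a resistance reading into a moisture number.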

Now, this train of thought got me thinking about a lot of the STEM activities I’ve seen used in K-12 and how often these involve measurement of one form or another. I’d like to encourage you, when you think about such activities, to think about the measuring you’re doing a little more deeply than you might be used to. For those very many cases where you’re doing some sort of “can’t see X, so I’ll measure something I can see that varies with X,” it’s probably worth taking a moment and having your students think through the mathematical model you’re relying upon to make that work. Hopefully this will give your students a deeper appreciation of another aspect of the interplay between S, T, E, and M. And, if your students get really ambitious and want to come design and build my new smart garden, let me know. I’m always open to help!

John

Over the last several months, we’ve explored mathematical modeling from a variety of perspectives here in this space. We’ve talked about the CCSSM, the modeling cycle, thought tools used by modelers, and we’ve explored a variety of examples, all with the goal of trying to understand how mathematical modelers think and how we might best train our students in the art of mathematical modeling.

One thing we have not done is follow the path of a typical course in mathematical modeling. Today, I want to talk a bit about what such courses look like, where we might benefit from the typical paths, and why we might want to deviate from them. Until recently, at least in the United States, mathematical modeling courses have largely been restricted to higher education. They’ve been taught primarily at the advanced undergraduate and graduate levels. There are exceptions, and there have been efforts to teach such courses at the introductory undergraduate level, but it’s reasonable to claim that the majority of the effort around teaching mathematical modeling has occurred at the upper undergraduate and graduate levels.

This means that the majority of textbooks focusing on mathematical modeling are also aimed at the upper undergraduate and graduate levels. If you search for “mathematical modeling” on Amazon, you’ll, in fact, find several thousand books along these lines. Some are more specialized, some less, but generically, they are aimed at this advanced audience. If you examine the most popular of these, the ones that focus on “mathematical modeling” rather than “mathematical modeling in X,” you’ll find they all follow a typical pattern, and that this pattern is reflected in course syllabi at institutions across the country. Roughly speaking, the pattern is one where the texts are organized by mathematical topic first, with application areas treated within those topics. That is, you’ll see them divided into chapters with titles like “Modeling with difference equations,” “Modeling with ordinary differential equations,” “Modeling with partial differential equations,” and so on. Within each chapter, the authors introduce an application area, and then show the reader a bunch of models developed in that area using the mathematics of the chapter heading.

The philosophy behind this approach is one of “modeling can’t be taught, but it can be caught.” That is, authors and instructors largely assume that if they show students enough examples of mathematical models, they’ll eventually catch on and know how to model. I freely admit that I, too, was a member of this camp for a long time. After all, it had worked for me! But, in teaching mathematical modeling over the last twenty-some-odd years, and especially as I’ve had the opportunity to work more closely with the K-12 community, I found myself growing increasingly frustrated and disillusioned with this approach. In the large majority of cases, it became apparent to me that while a handful of students did “catch it,” the vast majority did not. They could perhaps use models they’d seen before, and engage in somewhat trivial extensions of such models, but when faced with a new situation, they were lost. They hadn’t, by and large, learned how to actually model. They hadn’t become modelers.

I gradually realized that I don’t agree with the “can’t be taught, but can be caught” philosophy, and many of the blog posts here have been inspired by my personal change in perspective on this issue. I believe that we can, in fact, deconstruct and understand the process that constitutes mathematical modeling, articulate this process more clearly for our students, design activities that engage students in the essential competencies that comprise this process, and end up teaching a lot more students how to actually model. My colleague Rachel Levy, at Harvey Mudd, says that what we’ve been doing for a very long time is teaching “model appreciation” rather than teaching the art of mathematical modeling. I think there is a lot of wisdom in that statement, and that we can make this shift, at all levels, from teaching model appreciation to teaching the art of mathematical modeling. As I’ve had the chance to explore the mathematics education literature on this topic, especially the international literature, I’ve learned that progress has been made in this regard, and I’ve worked to incorporate the best of these ideas into my own teaching and to share some of them here.

But, in the back of my head, there is also this whole “baby with the bathwater” thought that’s been nagging me for a long time. In an earlier blog post we explored the notion of “thought tools.” These are those ways of thinking that are often widely applicable and are characteristically wielded by those who engage at a high level in a particular practice. We talked a bit about some of the thought tools wielded by mathematical modelers. This thought tool perspective pushed me into doing a lot of meta-thinking – trying to observe my own thought processes as a mathematical modeler and compare them with those of students just learning the art.

One thought tool that I find myself wielding frequently and have observed other mathematical modelers utilize as well is “analogical thinking.” That is, when faced with some unfamiliar modeling situation, a modeler will often start by arguing “well, this situation is kind of like situation X, so perhaps we can proceed as follows…” They have, in their head, a repository of modeling approaches for a wide variety of problems. They rely on the miraculous fact that mathematics and mathematical approaches to applied problems are often wonderfully generalizable, and so develop solutions to new problems relying at first on what they’ve seen work before. The uniqueness of each situation requires adaptation and creativity, but they can often grasp a ready starting point from which progress can be made.

That brings us right back to model appreciation, because, after all, where did they get this broad experience with various modeling approaches? Well, they’ve looked at and worked through lots of models built by others to tackle lots of different situations. It seems they caught something useful somewhere along the line. This seemingly presents us with a dilemma: the vast majority of students don’t catch modeling via the model appreciation approach, but the experience gained through model appreciation seems essential to being a good modeler. I don’t think this is actually a dilemma. I think we should conclude that we need to engage students in both some model appreciation and some deconstruction and practice of the art and the process of thinking like a mathematical modeler. I believe, and can claim from my own personal experience, that this dual approach is more successful.

So, as you think through and work to incorporate mathematical modeling into your classroom, I suggest you keep this dual approach in mind. Spend time giving students the opportunity to practice modeling on new and fresh situations, help them understand the process and the practice, pay attention to developing the thought tools they’ll need and the competencies they must master, but, from time to time, don’t be afraid to engage in a little model appreciation and help your students start to build their own “model repositories” in their heads.

John