Math – Education Next
https://www.educationnext.org
A Journal of Opinion and Research About Education Policy

Stanford Summer Math Camp Defense Doesn’t Add Up, Either
https://www.educationnext.org/stanford-summer-math-camp-defense-doesnt-add-up-either/
Wed, 24 May 2023
Flawed, non-causal research that the proposed California framework embraces

Stanford University was the site of a summer math camp whose outcomes were studied.

I thank Jack Dieckmann for reading my critique of the proposed California State Math Framework (“California’s New Math Framework Doesn’t Add Up”) and for writing a response (“Stanford Summer Math Camp Researchers Defend Study”). In my critique, I point to scores of studies cited by What Works Clearinghouse Practice Guides as examples of high-quality research that the framework ignores. I also mention two studies of Youcubed-designed math summer camps as examples of flawed, non-causal research that the proposed California State Math Framework embraces.

I focused on outcomes measured by how students performed on four tasks created by the Mathematical Assessment Research Service. Based on MARS data, Youcubed claims that students gained 2.8 years of math learning by attending its first 18-day summer camp in 2015. Dieckmann defends MARS as being “well-respected” and having a “rich legacy,” but he offers no psychometric data to support assessing students with the same four MARS tasks pre- and post-camp and converting gains into years of learning. Test-retest using the same instrument within such a short period of time is rarely good practice. And lacking a comparison or control group prevents the authors from making credible causal inferences from the scores.

Is there evidence that MARS tasks should not be used to measure the camps’ learning gains? Yes, quite a bit. The MARS website includes the following warning: “Note: please bear in mind that these materials are still in draft and unpolished form.” Later, the point is reiterated: “Note: please bear in mind that these prototype materials need some further trialing before inclusion in a high-stakes test.” I searched the list of assessments covered in the latest edition of the Buros Center’s Mental Measurements Yearbook, regarded as the encyclopedia of cognitive tests, and could find no entry for MARS. Finally, Evidence for ESSA and What Works Clearinghouse are the two main repositories for high-quality program evaluations and studies of education interventions. I searched both sites and found no studies using MARS.

The burden of proof is on any study using four MARS tasks to measure achievement gains to justify choosing that particular instrument for that particular purpose.

Dieckmann is correct that I did not discuss the analysis of change in math grades, even though a comparison group was selected using a matching algorithm. The national camp study compared the change in pre- and post-camp math grades, converted to a 4-point scale, of camp participants and matched non-participants. One reason not to take the “math GPA data” seriously is that grades are missing for more than one-third of camp participants (36%). Moreover, baseline statistics on math grades are not presented for treatment and comparison groups. Equivalence of the two groups’ GPAs before the camps cannot be verified.

Let’s give the benefit of the doubt and assume the two groups had similar pre-camp grades. Are post-camp grade differences meaningful? The paper states, “On average, students who attended camp had a math GPA that was 0.16 points higher than similar non-attendees.” In a real-world sense, that’s not very impressive on a four-point scale. We learn in the narrative that special education students made larger gains than non-special education students. Non-special education students’ gain of one-tenth of a GPA point is underwhelming.

Moreover, as reported in Table 5, camp dosage, as measured in hours of instruction, is inversely related to math GPA. More instruction is associated with less impact on GPA. When camps are grouped into three levels of instructional hours (low, medium, and high dosage), effects decline from low (0.27) to medium (0.09) to high (0.04) dosage. This is precisely the opposite of the pattern of changes reported for the MARS outcome—and the opposite of what one would expect if increased exposure to the camps boosted math grades.

The proposed California Math Framework relies on Youcubed for its philosophical outlook on K-12 mathematics: encouraging particular approaches to teaching the subject, defining its most important curricular topics, providing guidance on how schools should organize students into different coursework, and recommending the best way of measuring the mathematics that students learn. In treating the research it cites as compelling and the research it ignores as inconsequential, the framework also sets a standard for the kind of empirical evidence it believes educators should follow in making the crucial daily decisions that shape teaching and learning.

It’s astonishing that California’s K-12 math policy is poised to take the wrong road on so many important aspects of education.

Tom Loveless, a former 6th-grade teacher and Harvard public policy professor, is an expert on student achievement, education policy, and reform in K–12 schools. He also was a member of the National Math Advisory Panel and U.S. representative to the General Assembly, International Association for the Evaluation of Educational Achievement, 2004–2012.

Stanford Summer Math Camp Researchers Defend Study
https://www.educationnext.org/stanford-summer-math-camp-researchers-defend-study/
Tue, 23 May 2023
Critique of California math framework draws a response

Stanford University was the site of a summer math camp whose outcomes were studied.

To the Editors:

Tom Loveless’s analysis piece on the California Math Framework (“California’s New Math Framework Doesn’t Add Up”) was thick with criticism of the framework.

The framework was the product of a careful, considered, laborious, and collaborative legislative process, and it has widespread public support. In fact, the signatories on the support petition, from STEM professionals, educators, and 66 organizations, outnumber those on the opposing petition Loveless cites.

We are particularly concerned about his critique of the study of Youcubed summer camps cited in the framework.

The first Youcubed summer camp was conducted at Stanford in 2015. This first session resulted in achievement gains equivalent to 2.8 years of school. A video of the students participating in that very first camp can be seen here.

As Loveless notes, a later study examined camps conducted in 10 districts across the U.S., which found an average achievement gain equivalent to 1.6 years of school.

Participating school districts invested countless hours organizing and running their camps, providing students with new ways and new outlooks through which they might engage with mathematics. The camps changed students’ learning, and students’ enthusiasm for mathematics grew the further they progressed through each camp program. These school districts supplied all of their research data to Youcubed at Stanford, and their efforts, as well as the immeasurable accomplishments of their students, should be lauded rather than torn to shreds.

The Loveless piece describes the study of these camps as an “in-house” study. The truer description is that Stanford University researchers studied camps conducted by others across the U.S. The statistics produced were vetted by external evaluators, and the resulting journal article was peer reviewed by Frontiers in Education, a scientific journal.

Loveless argues further that there was no control group for the study’s analysis of test-score outcomes. He fails to mention the main result of the study: the achievement of students attending the camps was evaluated through analysis of their math GPAs in the following school year, compared to a control group. The result was that students attending Youcubed camps achieved significantly higher math GPAs. This result is based on a quasi-experimental design in which students who attended the camps were statistically matched with students who demonstrated similar levels of prior achievement but did not attend the camps. Comparison groups drew from administrative and demographic student-level data. We matched on various characteristics, including socioeconomic status, gender, English-learner status, special education status, and previous math grade point averages. The statisticians performed several sensitivity tests to ensure the robustness of the findings.

The Loveless piece critiques the use of Mathematics Assessment Resource Services (MARS) tasks as pre- and post-tests. These particular tasks were chosen because they are well-respected assessments scored by external evaluators. The Loveless writeup omits the rich legacy of these assessments, and their importance in assessing mathematical understanding.

Non-enrolled students did not take the MARS assessments because the tests were administered in two 40-minute blocks of time and were typically administered on the first day of camp. The students did not work on “similar problems” in camp, and the questions were chosen to measure algebraic understanding. These assessments were used alongside math GPAs as a measure of change, and were included in the statistical model as just one of perhaps many dimensions of mathematical understanding.

The Loveless piece further omits an important finding of the Stanford camp study, which showed that, across the 10 Youcubed camp sites, the more hours students spent in their respective camps, the greater their improvement on assessments.

Lastly, Loveless’s critique of the proposed California Framework’s approach to “automaticity” steps over the fact that the framework plainly highlights numerical understanding as an absolute necessity, positing that students can learn math facts and other number bonds with deeper and more expansive levels of understanding than through rote memorization.

Loveless closes his Youcubed takedown by saying “if the Youcubed gains are to be believed, all pandemic learning loss can be restored, and additional gains achieved, by two to four weeks of summer school.”

Youcubed does stand by its conclusions. It is probably true that if students were to receive concentrated and focused interventions that targeted mathematical understanding, rather than the same sort of rote memorization that has been taught not just for decades, but for centuries, any loss of mathematics achievement caused by the Covid pandemic could be reversed. Our camps provide just one example of how this can be done.

Youcubed is an open organization that encourages expansive thought and free-flowing collaboration. That same spirit infuses how we teach others to teach — interactively and cooperatively, with an eye always trained on what works, and what will stick with students over the long haul, as they traverse the earliest years of their schooling, and then transition to whatever future path may suit them best.

A response from Tom Loveless to this letter is available at “Stanford Summer Math Camp Defense Doesn’t Add Up, Either.”

Red Sox Turn Fenway Park into “Learning Lab” for Boston 6th Graders
https://www.educationnext.org/red-sox-turn-fenway-park-into-learning-lab-for-boston-6th-graders/
Fri, 05 May 2023
“The key to unlock opportunity is education and hard work,” students are told at launch event

Students from the 6th grade at Nathan Hale School complete a “bingo challenge” as part of the Red Sox Hall of Fame stop on their guided tour of the Fenway Park Learning Lab.

A class of 6th graders from the Nathan Hale School in Boston’s Roxbury neighborhood listened intently as Linda Henry explained how her husband John Henry’s childhood interest in baseball statistics had inspired a love of math that fueled his career, from a card-counting blackjack system to a statistically based commodities company to, eventually, owning the Red Sox.

Then the students heard the head of MassMutual U.S., Mike Fanning, talk about how he had lived as a kid with his immigrant grandparents, who taught him that “America is a land of opportunity,” and that “the key to unlock opportunity is education and hard work.”

Fanning described how Henry and the ownership group that bought the Red Sox in 2002 used data analytics to help the team win four World Series titles, breaking a championship drought that dated back to 1918. “Be curious, and be a lifelong learner,” he advised the students.

And then, after a surprise visit from Red Sox third baseman Rafael Devers, who earlier this year signed a $331 million, 11-year contract extension with the team, the students went off on the first-ever tour of what the team is calling the Fenway Park Learning Lab.

Red Sox third baseman Rafael Devers speaks with a 6th grade student from Nathan Hale School at a kickoff event for Fenway Park Learning Lab.

The Red Sox Foundation and the MassMutual Foundation plan to start with 1,000 students this year and ramp up over the next four years so that every Boston Public Schools 6th grader visits the ballpark for the educational experience. The district currently enrolls 2,852 6th graders, according to the state Department of Elementary and Secondary Education.

The six-stop tour has students learning history, geography, math, and science. Student visitors get baseball caps, t-shirts, and a backpack full of other souvenir items like baseball cards, binoculars, a calculator, and a pen. The most important piece of equipment may be a 40-page, seriously substantive workbook, developed with the Boston Public Schools, that students work their way through along the hourlong guided tour.

A baseball stadium turns out to be surprisingly fertile ground for teaching a wide range of subjects. Looking at the Red Sox win-loss record on the manually operated scoreboard on the left field “Green Monster” wall, students are asked to figure out how many games the team has played, and the team’s winning percentage. A stop at the Red Sox Hall of Fame has students reading the text on the historic plaques—and writing a plaque “for someone in your life who inspires you.” The turnips, arugula, scallions, and Swiss chard growing in 2,400 milk crates in the rooftop “Fenway Farms” are an occasion for a brief science lesson about oxygen and carbon dioxide. During a visit to the Fenway press box, students look at maps to locate the home countries and states of Red Sox players. History, and the integration of baseball, comes up when students peer out at the number 42 posted at Fenway in memory of Jackie Robinson. Financial literacy is taught by having students budget a meal based on a Fenway concession menu.

Fenway has been open to guided visits by paying tourists for years. The tour guide for the Hale School students, David Ranen, had more than 40 years of experience as an educator, 39 of them working for the Amherst-Pelham School District in the Pioneer Valley of Western Massachusetts. He had served as a music teacher and guidance counselor and seemed quite comfortable questioning the students, keeping their attention, and leading them through the educational activities.

Seven or eight of the students, and Hale School principal Candice Whitmore, raised their hands to indicate it was their first time ever inside Fenway Park. It was a reminder that for the Red Sox, the program might work not only as a way to help its hometown but also as a way to cultivate a new generation of fans.

The executive director of the Red Sox Foundation, Bekah Salwasser, responded to a question from Education Next in a telephone interview in advance of the May 4 event by saying that the foundation would be open to expanding the opportunity beyond the Boston Public Schools district. With additional funding and staffing, she said, the program could “ideally spread far and wide,” to also include charter and parochial school students, and perhaps even those outside Boston or Massachusetts, as far away as Maine or Connecticut. “Red Sox nation is everywhere,” she said.

A 2014 Education Next research article found educational benefits of a field trip to an art museum. This program is too new to evaluate, but if the effects of early, inspirational exposure to baseball statistics on Boston public school 6th graders are anything like what they were on John Henry, the impact may be both significant and positive.

Ira Stoll is managing editor of Education Next.

GPT-4 Is Still Not Ready to Teach Geometry
https://www.educationnext.org/chat-gpt-4-is-still-not-ready-to-teach-geometry/
Wed, 12 Apr 2023
But the forthcoming Wolfram plugin could change that.


Will the new update of an artificial intelligence platform be useful in teaching geometry?

On January 4, just after the holiday break, Education Next published my article “ChatGPT Is Not Ready to Teach Geometry (Yet).” I showed that version 3 of the popular chatbot developed by OpenAI (GPT-3) could give a passable explanation of Pythagoras’ theorem, but had serious limitations when asked to do other things that math teachers commonly do. In particular, GPT-3 could not describe student misconceptions or catch certain student errors. It couldn’t visualize a triangle or do anything equivalent, and it made a surprising computational error when squaring a number. I concluded that, despite the clear disruptive potential of AI for education, GPT-3 had little practical use for teaching geometry and might actually do more harm than good.

On March 14, OpenAI released version 4 of ChatGPT (GPT-4) to paid subscribers, and within days friends and colleagues—including EdNext’s managing editor—were telling me I ought to take a second look. “GPT-4 can do math now!” one friend told me. Indeed, OpenAI had published results claiming that GPT-4 could score above the 40th percentile on the AP Calculus BC exam and above the 80th percentile on the SAT Math exam, the AP Statistics exam and the AP Microeconomics exam. In addition, GPT-4 has the ability to combine graphics and text.

So it seemed worth asking again whether GPT-4 is ready to teach geometry.

The answer was still no. GPT-4 started out all right, but within a few minutes it was suggesting that Pythagoras’ theorem wasn’t limited to right triangles. And by the end it had painted itself into a corner where it couldn’t give a straight answer about whether a triangle had a right angle or which side of a triangle was the longest.

But by the time I finished writing this article, there was a new announcement that could improve performance a lot. GPT will soon be able to connect to Wolfram Alpha, another program that’s much better at math.

Its Explanation Is Fluent, but Not Entirely Accurate or Accessible

I started with the same question that I had asked GPT-3 over the holidays:

Explain how to use Pythagoras’ theorem in terms that a middle school math student would understand.

GPT-4 offered a longer and more cheerful explanation than GPT-3 had:

Pythagoras’ theorem is a math rule that tells us how to find the length of one side of a right triangle if we know the lengths of the other two sides.

So, what’s a right triangle? It’s a special kind of triangle that has one angle that measures exactly 90 degrees, like the corner of a square. You can tell if a triangle is a right triangle if one of its angles is a square corner.

Now, the theorem says that if you take the length of the shortest side of the right triangle (we call that side the “base”) and square it (multiply it by itself), and then take the length of the other side of the triangle (we call that side the “height”) and square it, and then add those two numbers together, you get the square of the length of the longest side of the triangle (which we call the “hypotenuse”).

It’s an OK explanation, but not every detail was correct. Contrary to GPT-4’s definition, the “base” is not necessarily the shortest side, and in fact it’s hard to see what the word “base” means unless you draw the triangle and orient it a certain way. One of GPT-4’s new features is the ability to integrate pictures and text, but it didn’t make use of that here. It would later.

GPT-4 summed up:

That might sound a little complicated, but it’s actually pretty simple. It means that if you know the lengths of two sides of a right triangle, you can figure out the length of the third side by using this formula. And that’s Pythagoras’ theorem!

What formula? GPT-4 hadn’t presented a formula in symbolic form, and although seeing a formula doesn’t always help students, it would have helped here. In fact, without a formula that can be rearranged to solve for different terms, it’s hard to see how knowing the length of any two sides would let you figure out the third.
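
For reference, the symbolic statement GPT-4 never wrote out is the standard one: if the legs are a and b and the hypotenuse is c, then

a^2 + b^2 = c^2

which can be rearranged to c = √(a^2 + b^2) when both legs are known, or to a = √(c^2 – b^2) when the hypotenuse and one leg are known. It is that rearrangement that lets you find any third side from the other two.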

Overall, although GPT-4 did a great job keeping its language at a middle school level, I doubt that a middle schooler unfamiliar with Pythagoras’ theorem would really get what GPT-4 was talking about.

Next I asked GPT-4:

Give an example.

And it did:

Sure, let’s say we have a right triangle with a base of 3 units and a height of 4 units. We want to find the length of the hypotenuse.

Using Pythagoras’ theorem, we can find the length of the hypotenuse as follows:

Square the length of the base: 3^2 = 9
Square the length of the height: 4^2 = 16
Add those two numbers together: 9 + 16 = 25
Take the square root of that sum: √25 = 5

So the length of the hypotenuse of this right triangle is 5 units.

The example is correct and detailed, and GPT-4 used bullet points to improve readability—something GPT-3 hadn’t done over the holiday. That said, it still didn’t state the formula or draw a picture, so a student might find the calculations a little mysterious if they’d never seen Pythagoras’ theorem before.

Next I asked:

What if you just know the length of the hypotenuse? Can you use that to get the lengths of the legs?

GPT-4 began:

Yes…

But technically the correct answer is No. If you just know the hypotenuse, that is not enough to get the length of both legs. In addition to the hypotenuse, you need one more leg or angle (in addition to the right angle) to get the length of the final leg.
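
To illustrate the angle route, which requires trigonometry rather than Pythagoras’ theorem alone: in a right triangle with a hypotenuse of 10 inches and one acute angle of 30 degrees (numbers chosen purely for illustration), the legs work out to

10 × sin(30°) = 5 inches
10 × cos(30°) ≈ 8.66 inches

and, as a check, 5^2 + 8.66^2 ≈ 100 = 10^2. Either way, the hypotenuse by itself does not pin down the legs.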

To be fair, the rest of GPT-4’s answer suggested that it knew this (to the extent that a chatbot “knows” anything). Perhaps GPT-4 was just being over-agreeable, like an upbeat teacher who doesn’t like to demoralize students by too often saying No. Here’s GPT-4’s full answer:

Yes, you can use Pythagoras’ theorem to find the lengths of the legs of a right triangle if you know the length of the hypotenuse and one of the legs.

Let’s say you have a right triangle with a hypotenuse of 10 units and one leg that measures 6 units. We can use Pythagoras’ theorem to find the length of the other leg as follows:

Square the length of the known leg: 6^2 = 36
Square the length of the hypotenuse: 10^2 = 100
Subtract the squared length of the known leg from the squared length of the hypotenuse: 100 – 36 = 64
Take the square root of that difference: √64 = 8

So the length of the unknown leg of this right triangle is 8 units.

Since all GPT-4’s examples so far involved variants of the classic 3-4-5 right triangle, I decided to give it an example with decimal numbers:

Suppose the hypotenuse of a right triangle is 7.1 miles long, and one of the legs is 3.85 miles long. How long is the other leg?

And after a few paragraphs of calculation GPT-4 gave the correct answer, 5.97—avoiding a computational error that had caused its predecessor, GPT-3, to give a slightly incorrect answer, 5.93, when I asked the same question over the holidays.
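
For anyone checking the steps with a hand calculator, the correct arithmetic runs:

7.1^2 = 50.41
3.85^2 = 14.8225
50.41 – 14.8225 = 35.5875
√35.5875 ≈ 5.97

so the other leg is about 5.97 miles, the answer GPT-4 gave.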

Its Discussion of Misconceptions Has Not Improved

As I wrote over the holidays, though, there’s more to teaching than giving correct explanations and examples. Teachers must also be able to field errors and misconceptions that might surface as students learn. Over the holidays, GPT-3 had bobbled when I asked this question:

What are some common misunderstandings of Pythagoras’ theorem?

GPT-4 gave a better answer when I asked this same question in March, but before I could copy it into my article, ChatGPT went down for a few days, and when it came back up my history was gone.

So I asked the question again, and now GPT-4 gave an answer that was just as bad as the one GPT-3 had given over the holidays. GPT-4 listed five “misconceptions,” but most of them were not misconceptions at all. To take things slightly out of order, GPT-4’s best example of a misconception was this:

The theorem does not work for all triangle shapes.

That is correct, but then GPT-4 went slightly off the rails:

Another common misunderstanding is that Pythagoras’ theorem works for all types of right-angled triangles, regardless of their shape. However, the theorem only works for triangles where the sides form a 90-degree angle, and the side opposite the right angle is the longest side of the triangle.

“Wait,” I asked, “what other kind of right triangle is there?” GPT-4 answered:

I apologize for the confusion in my previous response. You are correct that the only type of right-angled triangle is one where one of the angles measures 90 degrees. Therefore, there is no other type of right triangle.

What I meant to convey was that Pythagoras’ theorem only applies to right-angled triangles, where one of the angles is a right angle (90 degrees), and where the side opposite the right angle is the longest side of the triangle.

Again, I apologize for any confusion my previous response may have caused. Thank you for bringing this to my attention.

This was a graceful recovery, but it amounted to an admission that the “misconception” GPT-4 had identified wasn’t a misconception at all. And it revealed that GPT-4 doesn’t have an entirely stable concept of what a right triangle is. In fact, it really doesn’t have a “concept” of triangles at all.

A debate currently raging is whether GPT-4’s ability to talk fluently about a wide variety of topics means that it really understands those topics in some way that gets behind the language. For example, OpenAI Chief Scientist Ilya Sutskever has asked: “What does it mean to predict the next token well enough? … It means that you understand the underlying reality that led to the creation of that token.”

In the case of right triangles, I think the answer is clearly that ChatGPT doesn’t understand them. Of course, it often doesn’t generate the right next token, either.

Here’s another “misconception” that GPT-4 highlighted:

The converse of Pythagoras’ theorem…(i.e., that a triangle with sides satisfying the equation of the theorem must be a right-angled triangle) is not always true. There are some triangles that satisfy the equation of the theorem but are not right-angled triangles.

“Really?” I asked. “Give an example of a non-right triangle that satisfies Pythagoras’ theorem.” And again GPT-4 backtracked:

I apologize for the error in my previous response. Pythagoras’ theorem only applies to right-angled triangles, and a non-right triangle cannot satisfy the theorem. I appreciate you catching that mistake.

GPT-4’s last two “misconceptions” were that Pythagoras’ theorem does not work with complex numbers or in non-Euclidean geometries. In fact, there are versions of Pythagoras’ theorem that work for complex numbers and some but not all non-Euclidean geometries. But I let that go because, clearly, the conversation had gone beyond a middle school level.

Randomness and Context Can Change ChatGPT’s Answers

Earlier I mentioned that ChatGPT went down for a couple of days and lost our history, and the fact that its answers changed when it came back up reveals a couple of things about how it works.

First, its answers are not deterministic. ChatGPT runs on probabilities, so every answer goes in a different and slightly random direction. If you ask the same question twice, you may get different answers. I had forgotten that, so was unduly impressed by the first list of misconceptions that GPT-4 gave me, which were better than the not-really-misconceptions that GPT-3 had supplied over the holidays. I mistakenly thought that meant that GPT-4 had gotten better at identifying misconceptions, but when it went down, came back up, and described misconceptions again, the new list was just as bad as the one that GPT-3 had given. Evidently GPT-4 hadn’t improved in this respect; it had just gotten lucky the first time I asked.

Second, GPT-4 depends on context. In my initial conversation with GPT-4, I asked it to keep things on a middle school level, and it did, but when it went down it lost that history, and when it came back up and I asked it for misconceptions, it started talking about complex numbers and non-Euclidean geometries—which are topics that rarely come up until college engineering or graduate math courses. I should have reestablished the context by asking it for a middle school explanation again. This is easy to forget.

It Still Falls Down When Provided Contradictory Information

Finally I tried to describe a right triangle without identifying it as such explicitly. The triangle I had in mind was this one:

Triangle example 1

I began by saying:

Suppose a triangle has three sides called A, B, and C. A is 7 inches long and B is 7 inches long.

But then I made a mistake. I should have said:

The angle between A and C is 45 degrees, and so is the angle between B and C.

But instead I said:

The angle between A and C is 45 degrees, and so is the angle between A and B.

What I described was not a right triangle, and in fact it’s not a triangle at all. There is no way to draw a triangle with sides and angles as I’ve described them. I made this same mistake when talking to GPT-3 over the holidays, and I decided to repeat it because GPT-3’s response had been so revealing. Any tutor or teacher has to field students’ errors, and there are several ways to do it. You can correct the error; you can explain or demonstrate why it’s an error; or you can ask students to repeat or correct themselves.

Neither GPT-3 nor GPT-4 did any of these things. Instead, they ran with the example and quickly got themselves into trouble.

After describing my impossible triangle, I asked:

What is the length of side C?

GPT-4 answered:

To solve this problem, we can use the law of cosines, which relates the sides and angles of a triangle.

And after some detailed calculations, it emerged with the answer:

The length of side C is approximately 3.63 inches.

“Great!” I said, a little disingenuously. “Can you draw the triangle?” GPT-4 answered:

Sure! Here’s a rough sketch of the triangle:

Triangle example 2

In this triangle, sides A and B are both 7 inches long, and the angle between them measures 45 degrees. Side C is the side opposite the 45 degree angle, and we have just calculated that its length is approximately 3.63 inches.

“I don’t understand,” I said. “If sides A and B are both 7 inches, why does A look so much longer than B? And why does C look longer than B if C is only 3.63 inches?”

Again GPT-4 apologized, but this time it couldn’t recover:

I apologize for any confusion caused by the sketch. My intention was to provide a rough drawing to help visualize the triangle, but I understand that the proportions may be difficult to interpret.

To clarify, in a right-angled triangle where two sides have equal length (as in this case with sides A and B), the hypotenuse (side C) will be longer than either of the other two sides. However, in this case…we found that it is approximately 3.63 inches. This means that it is shorter than either of the other two sides, A and B. Therefore, the proportions in the original sketch were not accurate.

I asked: “How can you say C is the longest side and then say that it is the shortest side? And how can you say C is the hypotenuse when A is the hypotenuse in the drawing?”

GPT-4 apologized.

Then I pointed out: “The angle between A and B doesn’t look like it’s 45 degrees.” GPT-4 apologized again and provided an “updated version” that looked exactly the same, except that the angle between A and B was labeled as 45 degrees when it clearly is not:

Triangle example 3

I asked GPT-4 for the angle between A and C. After some calculations using the law of cosines, GPT-4 answered: 101 degrees. Then I asked for the angle between B and C. It answered 34 degrees, because the angles of a triangle must add to 180 degrees. Then I asked how this could be a right triangle if none of the angles was 90 degrees. I asked how the angle between A and C could be 101 degrees when it looks so much smaller than the other angles. I asked it how the angle between B and C could be 34 degrees when it looked for all the world like a right angle. And on and on.

GPT-4 politely apologized, but it kept going in circles, drawing the same picture over and over and saying contradictory things about it. Interestingly, the picture was different from the picture GPT-4 had drawn before the service went down in March. That picture has disappeared from GPT-4’s history, but if my memory serves it looked something like this:

Triangle example 4

It wasn’t a right triangle at all, and GPT-4 couldn’t tell me clearly if the symbols A, B, and C represented the sides, the angles, or the vertices.

It may seem a little odd that GPT-4 would generate two such different pictures from the same prompt, but it makes more sense when you remember two things. First, every response from ChatGPT contains a random component. Second, no picture could be correct, since the information I supplied could not describe any real triangle.

Cause for Worry, Cause for Hope

When I first proposed to write about ChatGPT’s math ability in December, it may have been fair to respond that no one had proposed to use ChatGPT as a math tutor.

What a difference a few months have made. On Friday, Axios published a story titled “Sal Khan explains why GPT-4 is ready to be a tutor,” which reported that Newark, New Jersey, and Hobart, Indiana, had joined a pilot of a new product called Khanmigo, which uses ChatGPT to help tutor math. I don’t think ChatGPT is ready for this, but in the story Khan says that “it’s getting better,” and “stresses that Khanmigo didn’t just take GPT-4 out of the box — it also added its own ‘secret sauce’ to help avoid math errors.”

I hope it works. I hope kids in Newark and Hobart aren’t struggling with a chatbot that can be as confusing and confused as the one that I interacted with. I hope teachers in Newark and Hobart are keeping a close eye on the situation. I hope the districts have a well-defined and rigorous process for trying and evaluating new technologies that use “secret sauce.”

But I don’t know, and I’m a little worried.

Meanwhile, the technology continues to evolve. While the next version, GPT-4.5, is scheduled for release in September or October, what interests me more is the announcement of a plug-in that connects ChatGPT to Wolfram Alpha—an older technology, released in 2009, that solves math problems and helps answer questions involving math and data. This is exactly what I suggested in my January 4 article (not that anyone was looking to me for advice), and it sounds very promising because Wolfram Alpha is fundamentally built for math in a way that GPT, as a language model, is not.

The integration of ChatGPT and Wolfram Alpha hasn’t been released yet, but it may bring us closer to the “not-too-distant future” that I speculated about in January—a future when “we may have intelligent programs that can tutor students in specific subjects—programs that can converse in natural language [like ChatGPT], draw on deep and accurate representations of subjects like geometry [like Wolfram Alpha], and recognize and correct the common missteps and misconceptions that lead to wrong answers.”

At least I hope so. But we’ll have to see.

Paul von Hippel is a professor and associate dean of research in the LBJ School of Public Affairs at the University of Texas, Austin. This article is dedicated to his incomparable 10th-grade geometry teacher, Glenn Gabanski.

Numeracy for All
https://www.educationnext.org/numeracy-for-all-four-key-ways-help-students-understand-math/
Wed, 18 Jan 2023
U.S. kids were struggling in math even before the pandemic. Here are four key ways to help students understand.

The Concrete to Pictorial to Abstract framework provides students with a deeper understanding.

For the first time since the 1990s, in nearly every state, fourth graders and eighth graders who took the National Assessment of Educational Progress exam lost ground in math. Leading researchers have estimated that low-income students may have lost up to 22 weeks of instruction. The surrounding rhetoric indicates that even with learning recovery efforts, we may have lost a generation of math learners.

While these sobering statistics have provided a long overdue wake-up call, kids have been struggling in math for decades. Even before the pandemic, the United States ranked 37th in math out of 77 countries that participated in the latest Programme for International Student Assessment. The pandemic has only intensified the need to ensure U.S. students deeply understand math.

As the CEO and co-founder of Zearn, I’ve spent the past decade working alongside educators and researchers studying real-time data on building math understanding. What I know for sure is that every kid can learn math and that we can take steps to create a numerate society. Among the key steps to doing that:

Create learning experiences that equip students to make sense of math.

As a sixth grader in Buffalo, New York, I was in the advanced math class. I remember math as flipping to the next chapter and seeing a brand-new thing that I was going to learn. With each new chapter, I memorized a list of often disconnected procedures.

My memory reflects what international rankings, like PISA, have shown: the traditional American approach to math is not a formula for deep understanding. U.S. students tend to learn by memorization and employ tricks rather than developing true understanding. Across the board, students who characterize themselves as memorizers score lowest on the PISA exam. By contrast, kids outside the U.S. perform better on PISA because their math understanding goes more than an inch deep. When teachers present math as a progression of a few big ideas, it leads to deeper understanding.

One of the best-proven approaches for doing this is the Concrete to Pictorial to Abstract framework. It teaches kids to understand math concepts in an intuitive and tangible way and helps them see how they can apply their knowledge to real-world problems.

For instance, rather than tell students to memorize that 3 x 0 = 0 or N x 0 = 0 or anything times zero is zero, we start with the concrete. We take out three plates. If the three plates have one cookie each, then we have three cookies or 3 x 1 = 3. If the three plates have two cookies each, then we have six cookies or 3 x 2 = 6.

If we, however, have three plates and zero cookies on each plate, then we have zero cookies, so 3 x 0 = 0. Similarly, if we had 20 plates with zero cookies on them, then that is zero cookies too. It then becomes clear to kids why N x 0 = 0. It’s not a rule to memorize. It’s an idea to understand.

This is how the Concrete to Pictorial to Abstract framework provides students with a deeper understanding. The framework also gives students an anchor when they don’t know how to start or solve a problem. Students who learn this way will look at equations and try to turn them into pictures or stories they can concretely understand. Ultimately, that helps them to tackle any problem.

Accelerate the math learning of every student.

During a short period of time in my childhood, I was very sick and missed a lot of school. When I returned, teachers would try to work with me on every concept I had missed. This was challenging, but we’re talking about weeks of unfinished learning.

Today’s teachers have an unprecedented task of addressing more than two years of disrupted learning, and they simply do not have enough time to go back and reteach everything.

The great news is they don’t have to. A better way forward exists.

Learning acceleration is a promising approach that focuses on teaching students lessons appropriate for their grade level, and reteaching only the skills and lessons from earlier grades that are necessary to understand the new content.

Imagine a seventh-grade lesson on negative numbers. Partway through the lesson, a student – let’s call her Brianna – is confronted with this problem: 1.4 / 2 = ?

Brianna is stumped on operations with decimals, specifically division. She is supposed to be learning about negative numbers, a new idea, but she is stuck here.

In a traditional remediation approach, she would be stopped here, and she might not see negative numbers again for weeks or months. Because division of decimals is a fifth-grade concept, she would stop moving forward in her seventh-grade work and spend her time catching up on fifth-grade content.

Learning acceleration is a different way of helping Brianna. Instead of going back and doing an extensive review of decimal operations, she begins with what she knows: whole-number division (14 / 2=?). She is provided a short lesson that demonstrates how dividing decimals follows all the same rules as dividing whole numbers. Then she goes back to the negative number lesson.

The next few times she confronts decimal operations, she gets these short lessons again in the context of her seventh-grade learning. After a little while, however, Brianna has caught up on decimal operations while continuing to move forward in her seventh-grade study of negative numbers.

Implement scalable and coherently connected extra learning time.

Extra learning time – often in the form of tutoring – is typically afforded only to select students. My sixth-grade twin boys have been fortunate to receive extra support in areas where they have needed more help.

Not everyone is so fortunate. To ensure all kids can catch up and move forward in math, we must implement extra learning time at scale. Moreover, this extra time must be coherently connected to what students are learning in core math time. It also must address any confusion on specific topics being taught. Absent this alignment, we are wasting students’ extra learning time.

Recently, one of my sons needed help with ratios, which he was learning in his math class. To promote understanding, his teacher used extended time to address his specific questions and present ratios in a different way. I am grateful that this extra learning time was dedicated to understanding ratios versus having him spend time on concepts he may not have fully mastered from prior grades.

Tennessee and Texas educational leaders have approached their statewide, high-impact tutoring programs in this manner.

Share real-time data to make ongoing, evidence-based improvements.

Data must be used better to continuously improve: to identify what is and isn’t working and to effectively communicate what needs to change to best support students’ learning. Leaders must employ data to drive decisions from selecting curricula and leveraging technology to determining how extra learning time is spent.

By sharing real-time data and partnering with experts from various fields, we can answer the most important questions about how kids come to understand the big ideas in math and apply them in real life. Progress in teaching and learning will not come from making a single, massive change overnight, but from many small, yet critical, evidence-based improvements along the way.

Math is imperative for individuals and for society. It has been critical to my success and my enjoyment as a learner. For this to be true for every kid, we must build a system that ensures numeracy for all. For every student to deeply understand and, dare I say, love learning math, every adult who touches students’ school experiences has a role to play.

Shalinee Sharma is CEO and co-founder of Zearn, a nonprofit educational organization behind the math learning platform used by 1 in 4 elementary-school students and 1 million middle-school students nationwide.

This article appeared in the Spring 2023 issue of Education Next. Suggested citation format:

Sharma, S. (2023). Numeracy for All: U.S. kids were struggling in math even before the pandemic. Here are four key ways to help students understand. Education Next, 23(2), 71-72.

ChatGPT Is Not Ready to Teach Geometry (Yet)
https://www.educationnext.org/chatgpt-is-not-ready-to-teach-geometry-yet/
Wed, 04 Jan 2023
The viral chatbot is often wrong, but never in doubt. Educators need to tread carefully.


In November 2022, the startup OpenAI released a “research preview” of ChatGPT, a computer program that could quickly and fluently answer questions posed in natural language, write essays and fictional tales in response to prompts, and hold up its end of a written conversation on a startlingly wide range of topics. ChatGPT and its successors will disrupt and transform many human activities—including education—for better or for worse.

To date, most education reporting has highlighted ChatGPT’s potential to facilitate cheating and plagiarism. On HITC, Ellissa Bain described how ChatGPT could “Write Your Papers in Seconds,” explaining that this is “great for students” and “examiners will never know.” It’s not at all great for student learning, but it can definitely fool examiners; a team at the New York Times Upshot confirmed that a small group of experts—including a teacher, a professor, a tutor, and a famous children’s author—often failed to distinguish between ChatGPT’s output and the writing of actual fourth and eighth graders in response to essay prompts from the National Assessment of Educational Progress. In The Atlantic, teacher Daniel Herman pronounced “The end of high-school English,” explaining that “what GPT can produce right now is better than the large majority of [student] writing.” ChatGPT could compare Hamlet to Beloved, illustrate Buddhist ideas using a pet hamster, and write a convincing essay describing how its (fabricated) experience volunteering at a pet shelter had prepared it for success at Stanford. The bot’s ability to fabricate an admissions essay raises the question of whether colleges that no longer require standardized tests will need to drop the essay requirement as well.

But does ChatGPT have legitimate educational uses? I got to wondering if ChatGPT could offer correction and guidance as students worked through math problems. Some tutoring systems (such as Mathia or ASSISTments) can offer help when students go off-track, but that help often comes in the form of canned text or video highlighting challenges that, though common, do not always address what is tripping up a specific student in a particular problem. And if students remain confused, current tutoring systems cannot respond to follow-up questions in natural language.

Can ChatGPT provide feedback and answer questions about math in a more tailored and natural way? The answer, for the time being, is no. Although ChatGPT can talk about math superficially, it doesn’t “understand” math with real depth. It cannot correct mathematical misconceptions; it often introduces misconceptions of its own; and it sometimes makes inexplicable mathematical errors that a basic spreadsheet or hand calculator wouldn’t make.

Here, I’ll show you.

Can ChatGPT talk about Pythagoras?

A couple of days ago, I asked ChatGPT the following question:

Explain how to use Pythagoras’ theorem in terms that a middle school math student would understand. Include an example.

ChatGPT provided its answer in less time than it would take me to type it:

Pythagoras’ theorem is a way to find the length of a missing side of a right triangle (a triangle with one 90 degree angle). It states that in a right triangle, the square of the length of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the other two sides.

So far I was impressed. I’d prefer to see these two long sentences unpacked into three or four short ones, and I’d like to see a right triangle defined explicitly as one with a right angle. But ChatGPT’s answer was accurate, and it followed up with examples—not just the one example I requested, but two. [1]

Imagine we have a right triangle where the two shorter sides (called the legs) are 3 inches and 4 inches long. We want to find the length of the hypotenuse (the side opposite the right angle).

According to Pythagoras’ theorem, we can find the length of the hypotenuse by using the formula:

hypotenuse^2 = leg1^2 + leg2^2

Plugging in the values from our triangle, we get:

hypotenuse^2 = 3^2 + 4^2
hypotenuse^2 = 9 + 16
hypotenuse^2 = 25

So the length of the hypotenuse is the square root of 25, which is 5 inches.

Here’s another example:

Imagine we have a right triangle where the hypotenuse is 5 inches long and one of the legs is 3 inches long. We want to find the length of the other leg.

We can use Pythagoras’ theorem to solve for the missing leg by using the formula:

leg^2 = hypotenuse^2 – leg1^2

Plugging in the values from our triangle, we get:

leg^2 = 5^2 – 3^2
leg^2 = 25 – 9
leg^2 = 16

So the length of the missing leg is the square root of 16, which is 4 inches.

I hope this helps! Let me know if you have any questions.

These examples were accurate, too, but they both described the same right triangle, with sides of length 3, 4, and 5—a longstanding cliché in introductory geometry. It would have been better to describe two different triangles, and it would have been better to explain where the formula for the second example came from. I asked for clarification:

Wait. Where did you get this formula? leg^2 = hypotenuse^2 – leg1^2

ChatGPT explained:

In a right triangle, the hypotenuse is the side opposite the right angle, and the other two sides are called the legs. The formula I provided, leg^2 = hypotenuse^2 – leg1^2, is simply another way to write Pythagoras’ theorem, which states that in a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the other two sides.

In other words, if you know the length of the hypotenuse and one of the legs in a right triangle, you can use this formula to find the length of the other leg.

This bit of arm-waving—it’s “just another way”—would not be terribly useful to a confused student wondering how the second formula was related to the first. ChatGPT followed up with an example, but it was the same example it had already given, worded exactly the same way.

So far ChatGPT’s output wasn’t pedagogically optimal, but it wasn’t terrible, either, and I couldn’t say that every human geometry teacher would have explained things better.

But does ChatGPT really “get” Pythagoras?

But could ChatGPT go deeper? Teaching consists of more than explanations and examples. A Schaum’s Outline can provide those. But good teachers can also anticipate, diagnose, and correct student misunderstandings. So I asked ChatGPT:

What are some common misunderstandings of Pythagoras’ theorem?

ChatGPT responded with four examples, which I’ll take just slightly out of order. The first three were howlers:

Some common misunderstandings about Pythagoras’ theorem include:

    1. Believing that it only works for right triangles. While Pythagoras’ theorem is most commonly used to find the length of a missing side in a right triangle, it can also be used to determine whether a triangle is a right triangle. If the lengths of the sides of a triangle satisfy the formula of Pythagoras’ theorem (i.e., if the square of the length of the hypotenuse is equal to the sum of the squares of the other two sides), then the triangle is a right triangle.
    2. Believing that it only works for right triangles with a 90 degree angle: Pythagoras’ theorem is specifically designed for right triangles, which have one 90 degree angle. However, it is possible for a triangle to have two acute angles (angles less than 90 degrees) and still be a right triangle.
    3. Believing that it only works for right triangles with legs that are perpendicular to each other: In a right triangle, the sides that are not the hypotenuse are called the legs. While it is true that the legs of a right triangle are typically perpendicular to each other, this is not a requirement for the theorem to be applicable. As long as a triangle has one 90 degree angle, Pythagoras’ theorem can be used to find the length of a missing side.

Not one of these three “common misunderstandings” is actually wrong, and the fact that ChatGPT listed them as wrong suggests that it has some misunderstandings of its own. ChatGPT doesn’t seem to know that Pythagoras’ theorem only applies to right triangles, and it doesn’t realize that all right triangles have a 90 degree angle and legs that are perpendicular to each other. It also doesn’t seem to realize that no triangle can have more than one angle that is 90 degrees or greater, and that implies that it doesn’t know a fundamental fact of geometry: that the angles of a triangle must add to 180 degrees.

The other misconception that ChatGPT provided had more to it:

    4. Believing that it only works for right triangles with legs of integer lengths: Pythagoras’ theorem can be used to find the length of a missing side in a right triangle with sides of any length, not just integer lengths.

This is in fact a misconception. Unfortunately it is one that ChatGPT reinforced by repeatedly using the example of a right triangle whose sides are 3, 4, and 5 inches long. I gave the program some credit for recognizing the issue, but I also wondered if it could handle non-integer lengths itself. So I asked:

Suppose the hypotenuse of a right triangle is 7.1 miles long, and one of the legs is 3.85 miles long. How long is the other leg?

ChatGPT’s answer was clear but contained a surprising error:

To find the length of the other leg, you can use the Pythagorean theorem…. In this case, the hypotenuse has length 7.1 miles, and one leg has length 3.85 miles. Let’s call the length of the other leg x. The Pythagorean theorem can be written as an equation as follows:

(7.1)^2 = (3.85)^2 + x^2

Solving for x, we get:

x = sqrt((7.1)^2 – (3.85)^2)

Plugging in the values and simplifying, we get:

x = sqrt(50.41 – 15.0225)
x = sqrt(35.3875)
x = 5.93 miles

Therefore, the length of the other leg is 5.93 miles.

Do you see the error? It’s minor but weird. ChatGPT plugged in the wrong value (15.0225) for the square of 3.85 (which should be 14.8225), and as a result it gave the wrong final answer (5.93 instead of 5.97). It’s not a big mistake, but it’s not just a rounding error. Any student checking the steps with a hand calculator or spreadsheet would wonder, as I did, why they weren’t getting the same result.

How can a computer program released in 2022 get the wrong answer when squaring a number? Is ChatGPT jerking my chain?

In light of ChatGPT’s confusion about the angles in a right triangle, I wondered if it would recognize a right triangle if I described it indirectly. So I started my next question:

Suppose a triangle has three sides called A, B, and C. A is 7 inches long and B is 7 inches long.

Now I meant to describe a right isosceles triangle, like this—

Illustration of a right isosceles triangle

—but I made a typo. I should have typed this:

The angle between A and C is 45 degrees, and so is the angle between B and C.

But instead I typed this:

The angle between A and C is 45 degrees, and so is the angle between A and B.

which is impossible if A and B are the same length, as I had previously stated they were. Try to draw a triangle that meets my specifications; you can’t.

Now ChatGPT could have expressed confusion or asked for clarification, but it didn’t. It plowed right ahead. When I finished my question,

What is the length of side C?

ChatGPT started its answer:

Since the angle between A and B is 45 degrees, and the angle between A and C is also 45 degrees, the triangle is an isosceles right triangle, where A and B are the legs and C is the hypotenuse….

This doesn’t make sense. If A and B are the legs of a right triangle, the angle between them can’t be 45 degrees; it has to be 90. ChatGPT went ahead and calculated the length of C using Pythagoras’ theorem, but it had revealed something important: it didn’t have a coherent internal representation of the triangle that we were talking about. It couldn’t visualize the triangle as you or I can, and it didn’t have any equivalent way to catch errors in verbal descriptions of visual objects.
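A program with a coherent model of the triangle could have caught the contradiction with a few lines of arithmetic. Here is a minimal sketch in Python (my own illustration of the kind of consistency check I mean, not anything ChatGPT produced; the variable names are mine) that applies the angle-sum rule and the law of sines to the numbers in my question:

    import math

    # The question's constraints: sides A and B are both 7 inches long, the angle
    # between A and B is 45 degrees, and so is the angle between A and C.
    A = B = 7.0
    angle_AB = 45.0                          # this angle sits opposite side C
    angle_AC = 45.0                          # this angle sits opposite side B
    angle_BC = 180.0 - angle_AB - angle_AC   # angles sum to 180, so this is 90

    # Law of sines: each side is proportional to the sine of its opposite angle.
    # Given B = 7 opposite a 45 degree angle, side A (opposite the 90 degree
    # angle) would have to be about 9.9 inches, not the 7 inches specified.
    required_A = B * math.sin(math.radians(angle_BC)) / math.sin(math.radians(angle_AC))
    print(angle_BC)              # 90.0
    print(round(required_A, 2))  # 9.9

No triangle can satisfy the description, which is exactly the kind of conclusion ChatGPT never reached.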

In short, ChatGPT doesn’t really “get” basic geometry. It can crank out reams of text that use geometric terminology, but it literally doesn’t know what it is talking about. It doesn’t have an internal representation of geometric shapes, and it occasionally makes basic calculation errors.

The problem goes beyond geometry

Geometry is not the only academic area where ChatGPT has trouble. In my very next question, motivated by the role that phonics plays in teaching young children to read, I asked ChatGPT to translate a couple of sentences into the International Phonetic Alphabet (IPA). ChatGPT said it couldn’t do that, and I give it credit for knowing its limits, but then it suggested that I use Google Translate. When I reported back that Google Translate can’t use IPA, either, ChatGPT apologized for the misunderstanding.

What is ChatGPT doing? It is bloviating, filling the screen with text that is fluent, persuasive, and sometimes accurate—but it isn’t reliable at all. ChatGPT is often wrong but never in doubt. It acts like an expert, and sometimes it can provide a convincing impersonation of one. But often it is a kind of b.s. artist, mixing truth, error, and fabrication in a way that can sound convincing unless you have some expertise yourself.

The educational applications of a tool like this are limited. All over the internet, teachers are discussing the possible uses of ChatGPT to tutor students, write lesson plans, or generate quiz questions. They need to be careful. While ChatGPT can generate reams of basic material, and some of it will be useful, teachers need to verify everything to avoid passing on misinformation to their students.

My experience was disappointing, but perhaps I should not have been surprised. After all, on December 10, OpenAI’s CEO Sam Altman tweeted that ChatGPT has problems with “robustness and truthfulness” and “it’s a mistake to be relying on it for anything important right now.” Other experts have commented that ChatGPT sometimes “lies” or “hallucinates.” ChatGPT’s interface alerts users that the program “may occasionally generate incorrect information.” When it comes to geometry or the capabilities of Google Translate, this is a grave understatement.

These could turn out to be short-lived problems, fixed in the next version—or they could persist for many years. There are about 250 exceptionally talented people working at OpenAI, and the fact that they released ChatGPT in its present condition suggests that its problems may not have an easy fix.

In the not-too-distant future, we may have intelligent programs that can tutor students in specific subjects—programs that can converse in natural language, draw on deep and accurate representations of subjects like geometry, and recognize and correct the common missteps and misconceptions that lead to wrong answers. But we are not there today. Today some tools (e.g., Wolfram Alpha) can do geometry, and some (e.g., Mathia or CTAT) can trace some wrong answers to their sources—but those tools rely on explicit subject-specific programming and cannot converse in natural language. Meanwhile AI tools like ChatGPT can converse fluently in natural language—but don’t seem to understand the basics of core academic subjects like geometry.

Despite its limitations, ChatGPT is publicly available, and some students and teachers are going to use it. Not all of them will use it carefully. We may not be prepared for the consequences.

Paul T. von Hippel is professor and associate dean for research in the LBJ School of Public Affairs at the University of Texas at Austin.


1. I’ve added indenting to make ChatGPT’s calculations more readable.

The post ChatGPT Is Not Ready to Teach Geometry (Yet) appeared first on Education Next.

To Rethink Math Education, Focus on Common Ground https://www.educationnext.org/to-rethink-math-education-focus-on-common-ground/ Thu, 11 Aug 2022 13:31:59 +0000 https://www.educationnext.org/?p=49715619 Calculus? Data Science? Embrace the “and,” says a response to the recent forum.

Jaime Escalante was the star math teacher at James A. Garfield High School in East Los Angeles.
Jaime Escalante’s success preparing mostly low-income, minority students for the AP calculus exam was the subject of the movie “Stand and Deliver.”

Recently, I turned 60 years old (also expressed as 720 months, or 21,915 days, or 525,960 hours, or 31,557,600 minutes), and over that time I’ve seen many great debates in the education policy arena. Often those debates reach equilibrium with a realization of what Jim Collins (author of Built to Last and Good to Great) calls “the genius of the AND.” Collins points out that successful businesses realize that there is validity in each of diametrically different points of view, and the most successful companies respect and leverage both—especially when there is no clear right or wrong. The idea of the genius of the AND has found a home in the work of improving literacy outcomes (successfully, in my opinion), and I believe the same can happen with improving math achievement in the United States.

At the National Association of State Boards of Education, of which I am the President & CEO, we recognize the important role that state boards of education and state policymakers can play in prioritizing areas for action and improvement. A recent issue of our journal, The State Education Standard, is devoted to Advancing Math & Science Instruction. In it, various authors address different aspects of the current state of math and science education. One of the key takeaways for me is that there is a great deal of agreement on various aspects of how to improve math achievement, and there are also some areas where honest disagreement exists. When I see the disagreement, I challenge myself to examine how both sides might be respected to further the best interest of students.

I loved reading the different points of view expressed in the recent Education Next forum on “Rethinking Math Education.” The two sets of authors nicely lay out their points of view. In each article I found ideas I agreed with and ones to which I objected. Reading between the lines, and leveraging some of my other knowledge about education reform and mathematics, I began to see the areas of agreement, and also opportunities for embracing the AND.

Here are the concepts about mathematics achievement on which I think we can all agree:

All students are mathematically capable. Let’s face it, the biggest obstacle to more students being successful in mathematics is the pernicious myth, all too pervasive in our society, that “some people just aren’t good at math.” In too many cases, rather than continuing to work with students until the students reach confidence and comfort with mathematics, adults (especially parents and teachers) will convey (explicitly and implicitly) that the students are simply not equipped to do mathematics. Once the student accepts that fiction as reality, the battle is lost.

The research is clear that the premise is patently false. Everyone can be competent and confident with math—we all need to believe this and act accordingly. We should all make a shared and tireless commitment to put a stake in the heart of this crushing lie. It should be as unacceptable to say “Johnny just can’t do math” as it would be to say “Johnny just can’t read.”

More students can be successful mastering high level mathematics. The status quo is unacceptable—both in absolute terms and in terms of equity. We know doing better is within our reach. There are plenty of case studies and proof points. One of my favorites is illustrated in the movie Stand and Deliver about the great Jaime Escalante and the success of students (mostly low-income and ethnic minority students) at James A. Garfield High School in Los Angeles in passing the Advanced Placement Calculus exam. I have seen other schools and classrooms experience similar successes.

There is a good understanding of what different and better math instruction looks like. As with many content areas, there is no one-size-fits-all for what can work to help more students succeed in math. The National Council of Teachers of Mathematics published Principles to Actions: Ensuring Mathematics Success for All (2014), and the Catalyzing Change series of curriculum guides. The Southern Regional Education Board has its Powerful Mathematics Instructional Practices. Other quality curriculum models are also available. High-quality instructional materials matter a lot, too. The Zearn math learning platform has shown tremendous success. EdReports.org has rated a number of math instructional products.

Deliberately focus on parents as partners. Any effort to improve instruction requires attention to the needs of parents and caregivers. Parent support can make a big difference in the implementation of new approaches, especially if parents see how they can help their child. I sympathize with the frustrations that many parents feel when they cannot help their children with math homework because it is different from how they were taught. The fact that a student is approaching a mathematical concept in a different way from how the parent or caregiver learned it should not prevent the parent or caregiver from supporting numeracy development. We can help parents better understand and play their important role in supporting learning.

Teachers need to be better equipped and supported to teach math. This is true for every subject area for which we know we can do better. The recent burst of activity around the science of reading and the ensuing signs of improvement in reading proficiency are results of the significant effort being made to help and coach more teachers in understanding what great literacy instruction is all about. It is all about teacher content knowledge, aligned curriculum, quality pedagogy and disposition. Coaching also has great power to improve teacher practice. Authors Barak and Mims and Levitt and Severts would certainly agree on the importance of well-equipped and supported teachers.

Focus is needed on elementary and middle school math. Given that the two articles address high school math, one might think that the focus should be there. But the sad reality is that a student’s ability to be successful in any high school math class, calculus or otherwise, is determined in elementary and middle school. Far too many students have given up on math by then. No matter where one stands on what math is taught in high school, we can agree that more students should emerge from middle school confident and capable in mathematics.

More math teaching should be done in the context of practical application. Nothing brings math alive, and makes retention more likely, than when it is learned in the context of real world problems and experiences. I always found students who were enrolled in construction trades and other career focused programs at career-technical education centers to have a very robust understanding of the applicable mathematics, as well as comfort in understanding math as a tool to solve problems.

In the spirit of the “genius of the AND,” here are my ideas for charting a course forward.

More students need to master algebra II and calculus, but if that can’t happen in the near term, let’s accept accomplishment in other rigorous math that keeps the doors to future success open for more students. This is a lesson that we can learn from higher education and the strategies being used to recast developmental mathematics. When many institutions of higher education figured out that they were losing a great number of students in what noted mathematician Uri Treisman called “the burial ground for aspirations” — developmental mathematics — they began to design different and better math knowledge development strategies. High school is to higher education what middle school is to high school. When high school students come to the door of higher education unprepared or underprepared, the traditional solution has been to remediate them in separate, isolated, and lengthy sequences of math courses (a strategy that historically failed). Efforts to find a better way, a way that allowed more students to be successful, led many higher education institutions to embrace approaches, like co-requisite classes, to equip students with the math they need to succeed in their chosen field of study. (I tip my hat to Complete College America and the Dana Center at the University of Texas-Austin for having led the charge in this work nationally.)

This approach has helped more students reach higher levels of math competency and, more importantly, complete degrees and gain access to meaningful, high-paying jobs. No one criticized this approach for failing to insist that students learn to the highest levels of math, because it made a huge difference in many lives by equipping students with the math needed to complete their degree programs and launch careers. If it is unlikely that a middle school student entering high school will complete the ideal path of math achievement, however we might define it, wouldn’t it make sense to identify a rigorous alternative math pathway that could align to a meaningful career pathway for which the student might achieve success?

Create mechanisms to ensure rigor. There is a fear that any alternative to the algebra/calculus route will not be rigorous. This fear can be easily addressed by enabling mechanisms that ensure rigor. After all, if a student gets a 4 on the AP Statistics exam, isn’t that a sign of a mathematical disposition that has value in its own right and also signals the student’s capability to master other types of math?

Set targets for more students to become proficient in algebra/calculus and also for more students to be proficient in other rigorous math. Create mechanisms that gauge progress toward equity. There is so much room for improvement to achieve measurable and significant increases in all areas of math achievement. That means all perspectives can come out ahead. The greater the improvement, the better the system becomes at helping more students reach higher and higher levels of math. In the long term, more will achieve at the highest levels. If the fear is that change will widen the equity gap, let’s establish the mechanisms to make sure that doesn’t happen.

Share the responsibility to ensure all students are confident and comfortable with mathematics. One of the key strategies in improving literacy has been to ensure that all teachers play a role in it. This includes social studies teachers, science teachers, math teachers, art teachers. The same is needed for math. Every discipline has quantitative aspects that can be explored using math.

It’s always interesting to read differing views on a particular topic in education. As a perennially hopeful person, I try to look beyond the areas of disagreement to find the common ground — and there is a lot in the mathematics space. The National Association of State Boards of Education is launching a State Mathematics Improvement Study Group for state board of education members and support staff to increase awareness and understanding of the issues. I hope that, with good information and technical assistance, state policymakers can catalyze state-designed initiatives to improve math outcomes. History has shown that when such initiatives are designed by multiple partners and when the diverse points of view are understood and respected, transformative change is possible. Now is the time to make this happen in math education.

Paolo DeMaria is president and CEO of the National Association of State Boards of Education.

The post To Rethink Math Education, Focus on Common Ground appeared first on Education Next.

A Math Teacher’s Frank and Funny Take on Math Education https://www.educationnext.org/a-math-teachers-frank-and-funny-take-on-math-education/ Thu, 17 Feb 2022 10:00:10 +0000 https://www.educationnext.org/?p=49714797 "Full of the kinds of things that teachers say privately but hesitate to speak aloud"


Cover of Out on Good Behavior by Barry Garelick

Math is fundamental. This observation is a groan-inducing cliché, but it’s also true. Math matters for employment, financial literacy, and even for navigating evidentiary claims about things like Covid-19 and climate change. Yet math education seems to have gotten sidelined amid broader debates about school culture, civics, and the rest. Lately, when math does come up, it seems like it’s due to efforts to eliminate accelerated offerings or do away with the requirement that students answer questions correctly. And, of course, this is all against the backdrop of the devastating pandemic declines in math performance.

Photo of Barry Garelick

If you’re concerned about this, where can you turn? Well, one place is a recently published book from the inimitable Barry Garelick, a second-career math teacher with a chip on his shoulder and a deep affinity for Mary Dolciani’s classic 1962 math textbook Modern Algebra. Garelick, whom readers likely already know from his various books and articles (in fact, he penned one of the more popular Rick Hess Straight Up guest letters last year), has delivered a work that’s filled with bracing, laugh-out-loud takes on math education and the teacher’s lot. Out on Good Behavior: Teaching Math While Looking Over Your Shoulder is delightfully pithy (clocking in at a slender 94 pages) and filled with short chapters that bear titles like “The Prospect of a Horrible PD, a Horrible Meeting, and an Unlikely Collaboration.”

Throughout the volume, Garelick shares stories from his own experience that capture the state of math education and illuminate the frustrations of teaching math today. In one anecdote, Garelick recalls the professional development trainer who excitedly shared that students would be able to get credit on the test for offering a satisfactory explanation, even if they had the wrong answers. That posed a challenge, she cautioned: “Explaining answers is tough for students and for this reason there is a need for discourse in the classroom and ‘rich tasks.’”

When Garelick asked what constituted a “rich task,” she said: “It’s a problem that has multiple entry points and has various levels of cognitive demands. Every student can be successful on at least part of it.”

I quite like Garelick’s take on that indecipherable response: “Her answer was extraordinary in its eloquence at saying absolutely nothing.” I routinely hear from teachers and administrators who really, really wish they were free to say things like that in the course of staff meetings or professional training sessions.

This is the rare text in which an educator calls out the patronizing air of so many reformers and trainers. Recalling one conference where the moderator urged teachers to name their “super power,” Garelick drily asks the reader, “Why is so much PD steeped with the vocabulary that has teachers being ‘rock stars’ or ‘super heroes’?”

Garelick is stubbornly, even proudly, traditionalist in his takes. His approach to teaching negative numbers perfectly encapsulates his approach. He says, bluntly, “I do not like to prolong the topic.” He elaborates, “I once observed a teacher taking three weeks to teach it. The students had it down fairly well when the teacher introduced a new explanation using colored circles, causing confusion.”

Exasperated, one girl asked, “Why are we doing this?” The teacher explained that, since the students had learned how negative numbers work, it was time to understand why they work that way.

Garelick recounts the student’s plaintive response: “I don’t want to understand!”

Garelick may be the only math author willing to publicly state that he thinks the student has a point. No fan of the Common Core or the broader push for conceptual math, he instead argues, “I’ve found that a lot of the confusion with the addition and subtraction of negative integers comes from giving students more techniques and pictorials than are really needed.”

At one point, he describes guiltily confessing to his supervisor that he’d attended a workshop session on the role of memory. She tells him, “Memorization is not a good thing.” She then asks, with some concern, “Was this person advocating it?” Throughout the book, one is frequently reminded just how much teachers who believe in phonics, math procedure, or memorization can feel like they’re moles struggling to escape persecution.

Garelick’s book is full of the kinds of things that teachers say privately but hesitate to speak aloud. Whatever side you’re prone to take in the math wars, Garelick’s wry reflections are well worth checking out.

Frederick Hess is director of education policy studies at the American Enterprise Institute and an executive editor of Education Next.

This post originally appeared on Rick Hess Straight Up.

The post A Math Teacher’s Frank and Funny Take on Math Education appeared first on Education Next.

California Moves to Replace Math With “Data Science,” in the name of Equity and Inclusion https://www.educationnext.org/california-moves-to-replace-math-with-data-science-in-the-name-of-equity-and-inclusion/ Wed, 15 Dec 2021 09:59:18 +0000 https://www.educationnext.org/?p=49714262 “Watered-down diet” would not give students fluency with algebra and an introduction to calculus


For many years, Professor Jo Boaler has provided extraordinary takes on math education from her bunker at Stanford University in California. For example, there was the time Boaler claimed that a mistake makes your brain “grow” even if you are unaware that you have made one. Then, there was the time that Boaler claimed that it was terrible to ask students to memorize times-tables, suggesting she had never memorized times-tables herself and that this had not held her back as a professor of math education (a revealing admission). And we must not forget all those times Boaler has claimed that timed tests cause math anxiety and yet, when you follow the citation trail, the evidence to support this claim appears to be lacking. [Boaler’s book Limitless Mind was reviewed by Daniel Ansari in the Spring 2020 issue of Education Next under the headline “The Case for Limitlessness Has Its Limits.”]

Nevertheless, the good citizens of California decided to ask Boaler to help redesign their math curriculum.

First, a little context.

Boaler is probably best known for her “Railside” study. This study involves a long-term comparison of three schools, one of which, Railside, taught math in a “reform-oriented” way that avoided a focus on “executing procedures” and instead set up problems that students could solve in a number of different ways. This sounds like a variant of problem-based learning. Although the wider evidence suggests problem-based learning is ineffective, Boaler found that, “Compared with the other two schools in the study, Railside students learned more, enjoyed mathematics more and progressed to higher mathematics levels.”

This is where the tale takes a dark turn. In 2012, Wayne Bishop, Paul Clopton and James Milgram, three Californian math professors, released a preprint paper (i.e. it was not peer-reviewed or published in a journal). Boaler never identified the schools in the “Railside” study, preferring pseudonyms, but Bishop et al. claimed to be able to identify them from the data presented in her paper. On this basis, they argued that Boaler’s conclusions were unsound. Boaler then published an article about “attacks by Milgram and Bishop,” alleging their behavior constituted bullying.

Whoever is right about the facts of this case, it does not take the kind of investigation conducted by Bishop et al. to note a fundamental flaw with the Railside study. Even if it really does show that students at Railside did better than at the other two schools, it is still a study of just three schools. As clearly described in the abstract, the schools are not even similar to each other. Unless you believe that every single reform-oriented school is worse than every single non-reform-oriented school, the results prove nothing. I am pretty sure that if I were given the whole of California to choose from, I could replicate these results in reverse.

I am not claiming that Boaler chose the schools in order to obtain a particular set of results, I am claiming that the results she obtained could have arisen purely by chance and therefore prove nothing. This is why in randomized controlled trials, rather than having one or two items in each condition, there are usually tens or even hundreds.

And this is not the first research of its kind. Boaler conducted a similar study in London before she moved to the U.S. This time, only two schools were involved, “Amber Hill” and “Phoenix Park.” Again, Boaler claimed that the reform-oriented school was somehow better than the other one and again, this proves nothing.

Nevertheless, Boaler is a math-education rock star. Her Youcubed maths teaching website has a hit counter that reads over 54 million at the time of writing. Youcubed hosts many of Boaler’s papers, repeats her claim that “mistakes grow your brain” and her claim about the link between timed-tests and math anxiety, while also loudly trumpeting the concept of “Data Science.”

Boaler was a supervisor of Dan Meyer when Meyer was completing a PhD at Stanford University. Meyer is probably best known for his 2010 TED Talk in which he argues that the math problems we give to students need to be more open and less structured, giving fewer clues as to how to solve them and including extraneous information. For many years, the motto on his blog site was “less helpful.” This approach is at odds with what we have learned from cognitive science and is likely to overload students who are learning a particular set of math concepts for the first time. Nevertheless, in 2015, The New Republic declared that Meyer was, “The Man Who Will Save Math.”

The media cannot get enough of this stuff.

So, where does “Data Science” fit in? It seems that Boaler favors a problem-solving approach in math lessons where the teacher acts as a hands-off facilitator. I am sure this is for the right reasons and she genuinely believes it is more effective. And yet if you throw a quadratic functions problem, or even one requiring the manipulation of linear algebra, in front of students who don’t know the relevant math, they won’t know where to start. Even if you set it up so that the problem makes sense, the chances of them inventing the right algebra to solve it are slim. Those students who are already at a disadvantage will fail to learn.

Rather than question the premise that a problem-solving approach is best, an alternative is to change the math you want students to learn so that problems can be tackled by students with little knowledge and with minimal teacher instruction. Teachers who take a problem-based approach often stress that they are developing problem solving skills rather than well-defined procedural or conceptual knowledge. This accounts for the profusion of activities we often see listed in curriculum documents, such as the recent draft Australian math curriculum, where the elaborations include statements like, “using digital tools to create tessellations of shapes, including paver and tiling patterns.” Why would you do this? What is the purpose? The purpose is activity and the hypothesis is that engaging in math-type activity = learning mathematics.

Similarly, if you throw a bunch of data at students, they can group it, type it into a spreadsheet, plot graphs, and so on. It’s pretty easy to convince yourself they are doing something called “Data Science.” They don’t have to do anything particularly useful with the data because there are no right answers (and mistakes grow your brain anyway). This may explain why Boaler has pursued the objective of remaking the Californian High School Curriculum around “Data Science.”

To understand what that looks like, you first need to understand how U.S. states structure their high school math courses.

Unlike Europe, Australia and many other jurisdictions, the U.S. high school math curriculum (Grades 9-12) has evolved to constitute a set of discrete courses that address different aspects of math. The courses are:

  • Algebra I
  • Geometry
  • Algebra II
  • Precalculus
  • Calculus

The immediate problem this represents is that each of these five courses assumes knowledge of the previous courses in the sequence and lasts for a year. However, high school in the U.S. only lasts four years. If a student wants to complete Calculus before college, an effective prerequisite for many prestigious undergraduate science and math courses, the student and the school somehow need to speed the process up.

Traditionally, the way this is done is by allowing students to take Algebra I in the eighth or even seventh grade, something encouraged by a U.S. Department of Education push that seeks to expand access to Algebra I in the eighth grade. Not only does the draft Californian Mathematics Framework, co-authored by Jo Boaler, seek to discourage schools from offering Algebra I in Grades 7 and 8, it arguably seeks to supplant the traditional sequence of courses with “Data Science” units.

To those outside the U.S., the commitment to the traditional sequence can seem parochial. The contents of “Algebra I” are spread across multiple year levels in my Australian school, starting in about Year 7 and ending at around Year 10, although there is not a complete correspondence. However, it seems that any call to change the traditional sequence is generally used by reformers in the U.S. as a Trojan horse for changing teaching methods.

OK, so I have made some pretty big claims. However, if you want to know the evidence behind them, this document by Boaz Barak and Edith Cohen deals with these points in detail. Perhaps the most contentious is the idea that the new framework seeks to disrupt and supplant the old sequence of high school mathematics courses. In theory, the curriculum is intended to be neutral about this. However, video evidence that Barak and Cohen draw upon shows Boaler explaining that the authors have not updated the traditional sequence to make it as “vibrant and interesting” as the new Data Science pathway, “because it would give the message that both of these are equally important for kids.” Boaler also suggests that Data Science units allow for more “inclusive” and “equitable” teaching practices than the alternatives. This does not strike me as neutral.

Moreover, I am not convinced that the kind of teaching practices promoted by Boaler are inclusive and equitable. In the moment, they may allow students with little mathematical knowledge to interact with a problem alongside students with more mathematical knowledge. However, inclusion and equity should be about ensuring that class, race, gender and other personal characteristics are no bar to developing high-level mathematics skills. If you provide students in regular public schools a watered-down diet that does not give them fluency with algebra and an introduction to calculus, you do them no favors. These students will be competing for entry to college against students who could afford private tutors or private schools that can make up for these deficiencies.

An open letter signed by hundreds of mathematicians, researchers, software engineers and others with an interest in the mathematics curriculum makes this point clearly:

“Such a reform would disadvantage K-12 public school students in the United States compared with their international and private-school peers. It may lead to a de facto privatization of advanced mathematics K-12 education and disproportionately harm students with fewer resources.”

There is an obvious answer to this argument that reformers such as Boaler could reach for—colleges need to reduce their mathematics entry requirements. Yet, this just kicks the can down the road. At some point, advanced societies really do need mathematicians, scientists and engineers who are fluent in advanced mathematics. Those societies will either have to teach students algebra and calculus, regardless of whether such content lends itself to supposedly equitable and inclusive teaching methods, or they will need to import graduates from overseas who can do the math for them. If we can agree that at least some mathematical talent must be home-grown, why not grasp this nettle earlier, in Grades 7 and 8?

I am sure reformers such as Jo Boaler want the best for students. I am sure they are convinced that if we followed their advice, the world would be a more equitable place and a wider range of people, including those who currently think they are bad at the subject, would be engaged in mathematics for work or pleasure. This perhaps explains their missionary zeal.

But I just wonder whether they have thought it all through. Properly. The road to a mathematically impoverished future may be tessellated with good intentions.

Greg Ashman teaches physics and mathematics at an independent school in Ballarat, Australia and is the author of The Power of Explicit Teaching and Direct Instruction. He blogs at Filling the Pail, from which this post is adapted.

The post California Moves to Replace Math With “Data Science,” in the name of Equity and Inclusion appeared first on Education Next.

Math Concepts? Or Procedures? Best Answer Is Teaching Both https://www.educationnext.org/math-concepts-or-procedures-best-answer-is-teaching-both/ Wed, 21 Jul 2021 09:00:40 +0000 https://www.educationnext.org/?p=49713738 A math instructor writes in response to Barry Garelick


I read with some dismay the response by Barry Garelick (“What It Takes to Actually Improve Math Education”) to Rick Hess’s interview with Andrew Coulson (“The Case for Game-Based Math Learning”).

Garelick unfortunately sets up a straw-dog contest between conceptual knowledge and procedural knowledge in mathematics learning. There is clear evidence that both are important and that they support each other, including in one paper Garelick cites, and in the classic book, Conceptual and Procedural Knowledge, edited by James Hiebert, published in 1986. The 2001 book, Adding It Up, written by the Committee on Mathematics Learning, established by the National Research Council, widely cited in our field, reported on five strands of mathematical proficiency. One of those is procedural fluency. These classics underlie most contemporary views of mathematics education, including advice to teachers. Suffice it to say, almost no mathematics educator would deny deep connections between the two strands of knowledge, though there are differences of opinion on which should lead and when.

Consider the problem of developing procedural and conceptual understanding of division. An algorithm commonly taught, even mandated in many state standards for mathematics, is long division. Almost anyone who has completed a worksheet of twenty long division problems can tell you that practicing procedures alone does not miraculously result in a conceptual understanding of division. The more-complex-than-you-might-think concepts bundled together with understanding the algorithm include the role of the divisor and the dividend in getting the result, models for division, when division applies in solving real-world problems, interpreting quotients that are expressed as decimal fractions, and the role of place value and our base ten number system in carrying out the division algorithm. These aspects of the concepts of division are underpinnings that any student continuing in STEM education and careers must know and be able to use.
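To make the place-value point concrete, here is a minimal sketch in Python (an illustration of my own, not drawn from any of the books cited here; the function name and the sample division are mine) of the whole-number long-division algorithm. The digit-by-digit loop is exactly where the base ten structure does its work:

    def long_division(dividend, divisor):
        # Paper-and-pencil long division for positive integers: process the
        # dividend one digit at a time, left to right, carrying a remainder,
        # so the role of place value is explicit at every step.
        quotient_digits = []
        remainder = 0
        for digit in str(dividend):
            remainder = remainder * 10 + int(digit)      # "bring down" the next digit
            quotient_digits.append(remainder // divisor)  # quotient digit for this place
            remainder = remainder % divisor               # carry what is left rightward
        quotient = int("".join(str(d) for d in quotient_digits))
        return quotient, remainder

    print(long_division(7925, 6))   # (1320, 5): 7925 = 6 * 1320 + 5

Each pass through the loop is one column of the written algorithm, and the running remainder is where the divisor, the dividend, and our base ten notation interact.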

The role of instruction in helping students develop a multi-faceted view of division is clear. This instruction is very purposeful and the resulting knowledge useful. What might that instruction look like? A rich task (not a bad term) is justifying that the long-division algorithm really works for any two rational numbers. There are many ways to do it, and many of these are accessible to youth. Comparing alternative algorithms, invented or otherwise, is an important strategy for developing understanding and a justification of why either works all the time. This kind of comparison is a “generic” skill worth teaching. When students engage in this kind of activity—justifying—they are legitimately participating in a practice that almost every mathematician does: creating an argument, or proof, for a given conjecture. (In this case, that the long-division algorithm works every time.) When students do this work, they are engaging in authentic mathematics (another term Garelick disparages). And not surprisingly, they must grapple with the conceptual aspects of division I note above.

My colleagues and I have written a book for teachers, Mathematical Argumentation in Middle School, published by Corwin, in which we provide an organized, rigorous approach to bringing this practice of professional mathematicians to the middle school classroom. It is based on a dozen years of research doing just that. We think it is worth a read for anyone who wants to break free of the conceptual/procedural tug of war.

Jennifer Knudsen is a senior mathematics educator at TERC.

The post Math Concepts? Or Procedures? Best Answer Is Teaching Both appeared first on Education Next.
