Amazon, Data Points, and The Train to Perry
I have had an account at Amazon.com for at least ten years, so I am not naïve when it comes to understanding that they have collected a pile of data points about my buying habits. I’m used to having my inbox cluttered with recommendations for items related to things I have bought in the past. Now it’s not just what I buy, but what I search for. Amazon must have an army of computer programmers skilled at writing algorithms to analyze the online activity of the millions of people who frequent their website. And now they’re partnering with credit card companies to share my reward points. It won’t just be my purchases on Amazon’s site that they have access to; they’ll have the data trail for anything I use my credit card for!
I understand that this loss of my privacy is the price I pay for the convenience of buying online. I don’t like it, but to this point I’ve been willing to tolerate it in order to make my life easier. I guess I’ve come to accept that those sophisticated computer algorithms will just know everything there is to know about me—but maybe not. I began to suspect that maybe those algorithms were not as sophisticated as I thought when I received a typical Amazon recommendation last week. It was a recommendation to buy my own book. “Yes, Barbara McClanahan, based on your previous buying choices, we thought you might like The Train to Perry by Barbara McClanahan.”
At first I laughed. And then I decided, maybe it isn’t so funny after all. In fact, this email has come to symbolize for me everything that’s wrong with the expectations for Artificial Intelligence (AI) and Pearson’s version of personalized learning. Peter Greene posted a great blog on this subject a few days ago, so I won’t rehash it here. But let me explain why I think this email exemplifies the fallacy of AI and adaptive computer programs to teach our children.
Whatever computer program reviewed my purchases of children’s historical fiction was not skilled enough to notice that the customer it was recommending my book to had the same name as the book’s author. If a human being had been asked to check this recommendation, that person would have taken a second look, noting the unlikelihood that the author and the potential buyer would share a name. A human being would likely have caught the “coincidence,” but the computer didn’t, because the human who wrote its program had never anticipated this possibility.
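To make the point concrete: the missing safeguard is trivial to express once a human thinks of it. Here is a minimal sketch, in Python, of a hypothetical guard a recommender could apply. The function name, the data shape, and the whole scenario are my own illustration, not Amazon’s actual code; the point is only that the check is simple once someone anticipates the case.

```python
def should_recommend(buyer_name: str, book: dict) -> bool:
    """Hypothetical guard: don't recommend a book to its own author.

    This is an illustrative sketch, not any real recommender's logic.
    """
    # Compare names case-insensitively, ignoring surrounding whitespace.
    if buyer_name.strip().lower() == book["author"].strip().lower():
        return False
    return True

# The scenario from this post:
print(should_recommend(
    "Barbara McClanahan",
    {"title": "The Train to Perry", "author": "Barbara McClanahan"},
))  # → False: a human-anticipated check would have suppressed the email
```

The check itself is one line; the hard part, as the email shows, is that no programmer thought to write it.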
Humans are both the limitation of AI and the power behind it. No one human programmer—or even a group of them—can possibly think of all the eventualities their computer will face as its programs run. But one human being, by virtue of the way humans think, i.e., looking for unexpected patterns, will eventually “out-think” a computer. Computers can never be any greater than the programs written for them. And the programs being written to teach our children through “personalized” learning will be based on one perspective and one perspective only—that of the program’s designers. Children will be “processed” through the system without the opportunity to think outside the box. Yes, that can happen in any school with a highly scripted, prescriptive curriculum, but with “personalized learning,” it’s a done deal. Critical and creative thinking cannot survive in such a thinking-starved environment.
So, Amazon, no, I do not wish to purchase a copy of my own book; I have several already. And you don’t really need to keep sending me those recommendations; I’ll do my own thinking, thank you very much. By the way, are you related to Pearson?