For the second year, Dr. David Arditi, director of the Center for Theory, attended The World Forum on the Future of Democracy, Tech, and Humankind. This year he was part of The AI World Summit and presented on Education and AI.


FULL TRANSCRIPT

Kim Old: My name is Kim Old. I’m Chief Commercial Officer at EMOTIV, a neuroinformatics company advancing our understanding of the human brain through electroencephalography. Our objective today is to discuss how we can establish worldwide educational standards in the age of AI: standards that are accessible, high quality, and ethically sound. Before we dive into our panel, we have the honor of hearing from our keynote speaker, Doctor Luis Sentis. He’s a professor in Texas and co-founder and scientific advisor at Apptronik. He’s an esteemed researcher at the intersection of robotics, AI, and human-machine interaction, and his insights will help us frame the challenges and opportunities of AI-driven education. Please join me in welcoming Dr. Luis Sentis.

Luis Sentis: I’m going to give a brief keynote, and I guess it’s a collection of random thoughts that I’m going to try to connect together. A question that I often get from college students and graduate students is how long a student should work. And I often hide it, because the truth is that in elite schools they have to work 60 or more hours. If I say that, either I get sued or the next question I get is, why do we have to work more than normal people, right? And I was like, well, because you are in an elite school, but I’m not sure I buy into this proposition. With AI, we are transforming, and it’s happened to all of us. We’re becoming, I’m going to call it, something akin to smart actors, instantaneously experts in areas that we never knew before. Now we know that quantum devices are ready to go, but they are not being commercialized because they don’t do error correction. It’s not that I understand it, but suddenly I’m an expert in quantum computing, right? And maybe we should embrace that. I don’t think there’s anything wrong with the fact that we instantaneously know about many things. That also allows a transformation of education, in the sense that books are perhaps not written before you start school; they’re written by AIs with a human in the loop, and that’s a source of satisfaction and creativity for the students. I think it’s making students feel that they are experts, at least for a few minutes, and it can boost their confidence, given the outrageous number of students who come from socioeconomic backgrounds that don’t allow them to have that focus. Boosting morale is very important. 30% of families are single-parent families; that gives you another statistic. Perhaps we’re also exaggerating, and we’re still carrying the models of post-World War II into the 21st century, and we should start thinking about education outside of the classroom. The AIs and the robots that educate people, those are excellent systems and frameworks where we can get education personalized beyond the school. Why not let kids do whatever they desire while going through a personalized curriculum? We only need the proof of work, just the same as we have in cryptocurrencies. I don’t quite have it figured out, but they don’t have to develop everything; they have to understand and at least read what they write, and that might be sufficient. Overall, I hope that this meeting promotes lots of ideas, and I’m really happy to be here.

Kim Old: Thank you, Doctor Sentis. I’m pleased to introduce our distinguished panelists. Doctor Mona Demaidi is an entrepreneur and founder of STEMpire, with strong grassroots experience in expanding educational access. We also have Doctor David Arditi, professor of sociology and an expert on how digital technology shapes culture, including streaming platforms and educational media. Doctor Gary Marcus is a renowned scientist, bestselling author, and founder of Robust.AI and Geometric Intelligence. We also have Deb Adair joining us virtually. She’s the CEO of Quality Matters, an organization setting rigorous standards for online and digital learning. Thank you all for being here. Let’s dive into our discussion. Let’s consider the core principles and competencies needed to help communities break out of poverty and safeguard against authoritarian influences. What guiding standards should form the foundation of AI-driven education worldwide? Gary, from your AI and cognitive science background, which skills (critical thinking, digital literacy, civic engagement) are essential to incorporate into these standards?

Gary Marcus: We need all of those things. We need education on them. I think we just got an education in how important civics is in my native land. I think people don’t quite have the same education in those things as they used to. I think there’s a serious question about whether AI can help with those things or not. In the long term, AI can probably help with all of the things you talked about: critical thinking skills, civics, etc. In the short term, we have to realize that LLMs are basically regurgitation machines, and they repeat things that sort of sound contextually kind of OK, but it is well known, as I forecast in the year 2001, that they hallucinate a lot, and that is a problem for education. I have two kids, they’re 10 and 12, and when their teachers make one mistake, the whole teaching relationship can be undermined, particularly if the teacher doesn’t acknowledge the mistake. I experienced that when I was a kid: when teachers would get things wrong, that really undermined the teaching relationship, especially when they denied it, and I experienced that too. LLMs are going to make a lot of mistakes. To be a really good teacher, what you really need is theory of mind. You need to understand where a student is coming from and why they make the mistakes that they do, so that you can fix them. And LLMs don’t really have that. They fake it; they fake everything that they do. That’s a problem in terms of education. Where it leads, I think, is not that we should not use them at all, but that we need controlled studies, and I haven’t seen a lot of controlled studies of LLMs. I think we need to stick with the notion that became popular, let’s say in the last decade, of evidence-based practice: evidence-based medicine and evidence-based education. We need to do that with LLMs. I haven’t seen a lot, and we have to look out for what the long-term relationship is, how many mistakes they are making, and what the consequence of those mistakes is. In principle, this notion of an individualized tutor is incredibly seductive and valuable, and I think we need it. Lord knows, with education budgets, at least in the US, being cut radically, it’s very tempting, but I think we also must be careful and ask for evidence.

Kim Old: Absolutely. Mona, how can these standards be tailored to address the unique challenges faced by women and marginalized groups in under-resourced areas?


Mona Demaidi: Thank you. It’s an honor to be with all of you today on the stage and in the room. I’ll start by giving just a little bit of background on what I’m doing and how I believe we could start addressing the standards themselves. I’ve been highly involved in developing the AI national strategy for Palestine, which is a developing country, and in that context adopting AI is a challenge by itself. I’ve also been involved with UNESCO in adapting the AI readiness assessment methodology, which starts talking a little bit about AI and ethics, taking education into consideration, but if we look at the assessment itself, it’s super abstract. It just asks questions about what the percentage of women in AI is, what the percentage of researchers in AI is, without actually suggesting any kind of policies that we could work on. In the past 3 months I’ve launched an application called IOPro, which is an AI-powered application. I totally agree with you; it’s based on large language models, and we’ve been struggling a lot to ensure that it doesn’t hallucinate. But what we did was use the whole Palestinian education curriculum as an input, because we wanted to ensure that kids in Gaza and kids in refugee camps could actually have access to the curriculum given the very difficult circumstances. I do believe what we’re doing is challenging by itself, because for us monitoring and evaluation is still a challenge. And with all the trends that are coming, it’s going to become even more difficult, and it becomes even more difficult when we start talking about standards. When I look at standards, for me the question is: what kind of education do we want to achieve if we talk about standards? Are we looking at AI as an equalizer, which we could use to reach kids in marginalized areas to ensure that knowledge is democratized? Is that what we want? Are we taking into consideration the culture, the language? It doesn’t make sense, for example, to develop one AI-powered platform for kids in Gaza and Afghanistan that is the same as for the UK and US. So, we have to ensure we’re addressing that. Another thing that is going to be very challenging in terms of standardization is the ethics. How are we going to ensure that the knowledge we’re providing these kids, the decisions we’re taking on their behalf, the way that we’re processing their data, is sufficient? These are challenges we have to start tackling when we start talking about standardization. I’ll be more than happy to jump back in on the upcoming questions with suggestions we could make.

Kim Old: Thank you. I was just informed that we also have Leanda Barrington-Leach dialling in to join us. She’s the executive director of the 5Rights Foundation, advocating for children’s rights in the digital environment. Having identified the key standards, the next challenge is governance. Should an existing global body like UNESCO or a new entity take the lead, and how do we ensure that regional nuances and local expertise are integrated into this global framework? David, do you want to start?

David Arditi: Well, I think, as the other speakers were speaking, I was thinking about what we could propose to make things better with AI, and I think that if it’s going to be used in higher education, one of the most important things we need to do is ensure that people who are educators, who are in the fields where these technologies are being used or implemented, are also part of the conversation in developing this technology. So, if you are developing something like ChatGPT, perhaps we could have a law that requires that social scientists and area experts work with the engineers who are designing it, to make sure that we try to eliminate particular biases and discrimination that get embedded in the algorithms by the programmers’ own biases, right? There has been a long discussion about how algorithms end up discriminating, so we need to make sure, working our way up, that we implement those kinds of discussions.

Kim Old: Deb, how do we include youth voices and safeguard their interests in a global standard setting process?

Deb Adair: I’m going to piggyback on what the previous speaker was just talking about, in the sense that I think it’s a mistake to think about a single set of standards that’s going to do the job here, especially considering that AI is in a constantly and rapidly evolving state. So I think at one level you do need somebody to talk about some aspirations and safeguards, in terms of global societal goals that we all aim for, and maybe that is the right entity for it, an approach like the SDGs, for example. I think to make this useful and actionable, you really have to think about this as tiers of standards and safeguards that engage the people who are going to have to act on them, so that they are implementable. To do that, we have to be thinking about the spheres in which they’re going to be implemented and bringing in those voices, and student voices are part of this too. To envision this, it would be a mistake, I think, to come forward with anything but a set of universal standards upon which we can pin other kinds of more actionable guidance and standards. What’s happening in higher education, and even K-12, is that it isn’t serving students well right now, because there’s so much experimentation, and students are extremely confused as to whether they can use AI, how they should use AI, why they should use AI, and we’re in that phase of experimentation. There’s a whole ecosystem of things that we have to build around a set of standards, or training, to be able to really safeguard the practice. The speaker who talked about the evidence-based approach, that’s how we have always developed our standards, but I think we’re in a different time and place with AI and its rapid development.

David Arditi: I want to follow up and maybe contextualize what’s going on in education a little further. I know there are probably voices that would disagree with what I’m going to say, but in 2023 I was teaching an online introduction to popular culture class. I’ve taught it for over a decade, and as I was grading the discussion posts, I started seeing words that I’d never seen in 10 years being used continually. Not bad words, not wrong words, but almost every discussion post had the phrase “thought-provoking” in it. Every one of my students was suddenly building their arguments around this, this, and this, and it’s an introduction to popular culture class; I don’t expect them to be using that. So I’ve kind of become the AI police, and I find it numbing, because I can’t be as candid with my students. Maybe they’ll hear this on a recording and they’ll work their way around it, but I don’t want to tell them this is how I know you’re using AI, because then they could shift a little bit. Absolutely, these are tools, I’ve used them myself, and I think that there are different types, but the LLMs for writing papers are problematic. I also want to put that in the context of K through 12. My wife is a high school teacher in the state of Texas, and they have state testing; every student has to take a test. She’s an English teacher, so she must teach them to write in this really formulaic way. Well, when they implemented AI grading, the city of Dallas said, hold on, we want humans to grade ours, or at least a percentage of them. Scores rose 15% with human grading. So essentially what they’ve already modeled is that students need to write in the way that the AI can read, because that’s the only way for them to receive a high score and graduate from high school. So there’s this back and forth. Now, I don’t think it’s technology doing it to humans; I think that we as human beings write formulaically, and AI reproduces that in an even more formulaic way. It spits back at us Mad Libs. I don’t know if everybody in here is familiar with Mad Libs, but fill in the blank and there you go. My fear is the more we keep reproducing that cycle, the more formulaic and the more limited it gets, and then 20 years down the road, what do those papers even look like, or what’s even the reason for doing it? Because we could just throw our hands up for a second.

Luis Sentis: So would you challenge the model that grading is a bilateral kind of cycle, in which we grade the students but the students grade us? What we’re saying here is that this is an unbalanced, not well-thought-out system, because you basically give the easiest homeworks and exams, and then as a result everybody gets an A, and then you get an A as well. I don’t think that’s going to be the case, but I think students are going to be critical and they’re going to be demanding. There’s a large percentage that demands an education, and education is not just giving them money for free, right? What you’re implying is that you can hack that by you, as a teacher, creating the content with AI and assuming they’re going to be using AI, and that’s going to fulfill the only metric that we have for quality of teaching, which is the surveys that we have at the end of the semester. So, is that what you’re saying, that we have a broken system of evaluation?


Gary Marcus: He was trying to be diplomatic.


Kim Old: Gary, do you want to weigh in a bit more on that?


Gary Marcus: I’ll put some psychological context around it and add a couple of things. One is that we should be aware that LLMs can influence people’s beliefs, and they can do it in subtle ways that people don’t know about; there’s good research by Cornell Tech on that. They can incept false beliefs; there’s good research by Elizabeth Loftus and others on that. And there were just studies showing that they can impair people’s critical thinking because we offload the critical thinking. Maybe that’s the one that’s most relevant here. You have this kind of giant circular feedback system now. There was a cartoon where somebody took some bullet points and asked an AI to write a long essay, and then in the next panel of the cartoon, somebody takes the long essay and turns it back into the original bullet points. So, you have these kinds of cycles here, and you’re describing a kind of cycle where things take on their own life, and the goal of the educational system, which is supposed to be to educate students, for those who have forgotten, gets completely lost in all of this. I mean, term papers were always terrible, right? As a piece of writing, you’re writing for somebody who actually already knows the material. It’s this very weird sense of audience. It’s a weird transaction, but that weird transaction has gotten even worse, and I think the students are basically getting nothing out of it, roughly speaking. They write a prompt, and then they stick the thing in there, and then, if they’ve gotten wise, they change a couple of words, and you know. That will not appear in the surveys. You’re suggesting that if the students are getting nothing, they will write it down and complain. For 2/3 of the students at a large state university which I once taught at, I won’t name names, it was like the student was a zombie in the class. I think that they’re there to fulfil some requirements in order to get to some further places, so I don’t think that those 2/3 care. I’m suggesting that one thing we can do in this era is have the students use ChatGPT or whatever, and then actually look at the output and see how many hallucinations there are. I suggested this on a bunch of shows or whatever, and somebody posted a tweet; they said, I tried the Marcus method, which is to say I gave out the things to my students, and this tweet went viral because of the results. The person said 63 out of 63 students found hallucinations in the paper that came out, so that was kind of instructive. Then I’ve also said, as a matter of writing, that almost everything that ChatGPT writes is super bland, and so you can sit with the students and say, well, what would be a more interesting angle, write the same thing but make it more engaging. So I think you can try to make it a teachable moment, but in terms of the students writing it down on their evaluations, I don’t think they’ll do it. I think, you know, 75% of the variance in there is did they get an A without having to put in too much effort, so if you let them write their papers with ChatGPT, then you’re probably, other things being equal, going to get a good evaluation. That’s how you can hack the system if you’re a teacher. A lot of this is not leading to any good place, and I think there’s some reconstruction to do with how to do it. Taking a step back, and then I’ll yield the floor, there’s sort of one question: how could we build individualized tutors for 5th graders? That would be a fantastic thing, because we don’t have enough teachers and we don’t have enough engaged teachers and so forth. So, in principle that’s a fantastic thing, and then there’s what we have right now with AI, where I think the dominant use probably is writing term papers. There were these numbers for ChatGPT’s usage, and it went up through May and then fell down over the summer and then came back up in September. That tells you something about how these things are actually being used. I don’t think it’s a great use right now for education, which is not to say that nobody’s doing interesting work; I think some people are, but the predominant use really is LLMs to write term papers. That’s not a wonderful thing for society, I don’t think.

Kim Old: You bring up a great point. If we rely on AI providers to deliver global education, we do need to address the risk of misinformation, bias, or even potentially political manipulation. What criteria and safeguards can help us choose trustworthy vendors and protect learners everywhere? Mona, do you want to start?

Mona Demaidi: I can pick up on that. I think one of the things we could focus on is having open AI models, which is not that easy. I had a personal discussion with ChatGPT recently about ChatGPT Edu. We were trying to bring it on board at the university, and I have really struggled, up until now, to understand why I need to subscribe to it, since I’m already a normal user who just pays $20. That’s one of the struggles and challenges we’re having. I don’t think we, as educational institutions, are yet aware enough of how we could apply and deploy AI. We love the term: deploy it. That’s one thing; having open AI models is one of the things. It must be transparent. We must ensure that it’s also audited, but I’m not quite sure who should audit all the results. We have to understand how the predictions are coming out; these are some of the challenges we’re facing. And one more aspect: who should regulate these vendors? Should we have something like the WTO to do so? Should we ensure that we have something like a global certification which says that, yes, this is an open AI model which we could apply and adopt? These are things that we have to start thinking about as well, and it’s moving at such a fast pace that we’re not even catching up with it.

Kim Old: Deb, I’d love to hear from you, from a quality assurance standpoint, what checks and balances must AI vendors adopt to ensure objectivity and reliability?


Deb Adair: Well, where I think this is going is that there will be basic tools, basic access. What I see already happening is sort of a, I hesitate to use the word, democratization of access to AI, in terms of a non-technical user being able to create agents for particular purposes. The way that AI really gets disseminated and used in education is academic institutions, within their walls, taking and creating specific agents to be able to support specific functions, academic and otherwise. I think the focus is less on the vendor for that, although there are privacy considerations and economic and financial considerations; really it’s about education, the downstream users who are creating these agents, and that’s where a lot of focus is going to have to be to do it well. To the previous conversation about how it’s being used in higher education and education at large: I think term papers are a use there, but I think what it really requires is that educators go back and think about what it is students need to be able to know and do. Maybe writing term papers isn’t one of those things, right? And if we’re going to create a society with not just digital literacy but AI literacy, if students are actually going to have to know how to use AI in the workplace, then we need to think differently about what we’re asking them to do, what AI literacy looks like, what AI competency looks like, and who’s going to teach that. And so, we have a whole educational system where our educators are all experts in their domain but not necessarily experts in how AI is being used in that domain, and I think this is a challenge for us to do this well. There’s a whole aspect of how to use AI responsibly, ethically, in a way that serves society, but then there’s that next level of how we actually implement this in an educational environment so that we’re actually doing good, instead of fighting against the “we want students to do this one thing and they’re using AI so they’re not accomplishing any learning outcomes”. So maybe we need to rethink what we need those students to learn.

Kim Old: Yes. David, you studied streaming cultures and digital platforms. Can we draw parallels, and are there cautionary tales we can learn from the music or media streaming industries, to avoid bias and manipulation in AI educational tools?

David Arditi: One of the big things that I emphasize is the way we keep adding more subscriptions; I actually call it unending consumption. If you go back, a theorist by the name of Michel Aglietta, a French theorist, or French economist, talks about the expansion of the means of consumption. Well, we’ve hit this massive expansion of consumption now, where everybody has to have their own cell phone service, internet, all these expanding subscriptions that we have. I think that one place we’re going to end up seeing this is that right now ChatGPT is free, and perhaps one way that we could regulate student usage of something like ChatGPT to write their papers is if it actually cost money; maybe they wouldn’t subscribe, or it’s not quite worth it. What I can see happening, and there are plenty of these platforms, but there’s an expanding number of them, is the segmentation of different platforms from each other, and if you want to do particular activities, you have to subscribe to each one. Then that extra level of economic generation actually starts to impinge on people’s ability to use good tools. I was talking to a friend of mine before I came here; he’s an engineer that does stuff with drones, and he was telling me how they had a brand-new employee, and he was trying to figure out how to code whatever problem he was trying to fix. And the new employee came up and said, oh, well, what if we did it this way? And my friend looked at it and kind of went, I guess that’s OK, but this way is going to work a lot better. And he said, OK, I got that from some LLM tool, and he doesn’t even subscribe. So, there is a subscription for this tool, but he knows that the first choice is probably going to be the best, and the first choice is free. So, in what way does that end up limiting the realm of possibility, if you go, OK, well, I need to code something, I’m not going to actually learn how to do it, I’m going to ask this AI model, and I’m not going to pay for the more advanced system that will give me multiple options; I’m going to just trust that the first one works. Because these things hallucinate so much, how on earth do we know whether or not that’s the best way to do it? All of a sudden you’re programming drones, and those drones are falling out of the sky and hurting people, right? I mean, that’s an absolute possibility.

Gary Marcus: As a parenthetical, we should not tell kids to stop learning coding. It’s really important that we still have people who know how to code. These systems are not that good. It’s easy to have an experience like Kevin Roose did in the New York Times the other day, where you do some dopey little thing and you think that it works because you don’t actually understand. He had a system that would take a picture of a refrigerator and suggest a recipe, and he got a kind of “it looks good to me” reaction, like, oh, that’s impressive. If you thought it through, there are all kinds of problems, like, what if one ingredient is in front of another, or you have a paint container, things that people who actually code would immediately see, but he didn’t see any of it, and he wrote this rave in the New York Times that kind of suggested we don’t need to train our kids to be coders. The Guardian did much better coverage than The Times: they actually asked 7 people to use the same tools to implement Pac-Man, which is a game from 1982 and incredibly simple to implement; I could implement it on my watch if I wanted to. All 7 of the attempts were failures, so even something as simple as Pac-Man is actually outside the range of these no-code things. And, when you learn to code, what you learn is to debug, to figure out what’s wrong, and to write good code you need to write code that can be maintained over time. Having people get this vague sense that they know how to code because they play with these tools, and then saying, oh, we shouldn’t educate kids in coding anymore, is a huge mistake.

Kim Old: I just want to interrupt; we have Leanda joining us from the airport. Thank you so much, Leanda, for joining us. I know you’ve just got about 5-10 minutes before you jump on a plane. I just want to kind of open it up for comments from you.

Leanda Barrington-Leach: So, we work on children’s rights in the digital environment; we specifically focus on childhood and on tech, and on how tech impacts childhood and children’s rights. And I really wanted to join to say that, you know, the use of tech in education of course has massive potential, but it’s very, very important to be extremely careful with children’s education. The promise of tech in education so far essentially has not delivered, and there was a very interesting report from UNESCO, maybe two years or a year and a half ago, called the EdTech Tragedy, which showed that essentially there was basically no real evidence of improvements, or benefits, to children’s learning from all of the tech that had been rolled out during and since the pandemic. So, I think it’s incredibly important to understand that childhood is obviously something that we shouldn’t play with. We shouldn’t be experimenting on children, and the use of technology in the classroom has obviously been driven first by emergency, which is understandable, but also by commercial purposes taking advantage of the situation, which might be understandable, but if it doesn’t deliver any benefits to children and to learning, then obviously it should be very carefully considered. Education, obviously, is a right and a public good, and it’s important that the primary role of technology in education should not be about commercialization, so collecting or selling data. This is particularly important now that we’re starting to talk about using children’s education data also to train AI assistants. I also just wanted to say that it’s very, very important, when we talk about AI and how tech could revolutionize education, and the thought now that AI is certainly going to fulfill that potential and revolutionize education, that this whole talk of personalized learning is also something we need to be very, very careful about and make sure that we’re doing on the basis of expertise in learning and in pedagogy, because what child experts, development experts, and learning experts are telling us is that children do not learn through fully personalized systems. There might be some very specific benefits, in particular for specific children, but generally we don’t learn through working in a system which has infinite patience, in which everything is personalized to ourselves. The way human brains learn is through struggle, and so the concept of personalized learning is very different if you talk to an education specialist as compared to a tech or AI specialist. It’s just incredibly important that we put children first, that we take the expertise in education and put that front and foremost, and that we don’t develop technology for education from a tech perspective rather than an education perspective. We must be very, very focused on the outcomes, and that probably calls for some caution. At the 5Rights Foundation, we’ll be working over the coming year to have a national dialogue in the UK, as a pilot project, on education technology, to build a code for education and tech in schools and in the classroom more broadly. So more to come on this; we don’t have all the answers, but I just wanted to share the message. We see that tech is impacting children’s brain development, and we obviously mustn’t supercharge that with AI; it’s a price that is not worth paying. I’ll stop there, but thank you, thank you so much.


Kim Old: We’ve seen international cooperation establish social and environmental standards through trade agreements. Could similar mechanisms ensure accessible high-quality education worldwide, leveraging AI to monitor and enforce compliance? I’ll open it up to Gary.

Gary Marcus: Sure, in principle, we do have some good global cooperative mechanisms on some things. Airlines are probably the best example: we have very good cooperation around what should be safe to fly, and we could do that for what’s safe to educate. Whether we can get those agreements in the current political environment is an entirely different question, but I think it would be a good idea.

Deb Adair: It’s a really interesting question to apply that to AI, and you’re right. We started this 20 years ago, and voluntarily we’ve had over 2600 institutions in K-12 and higher education engage with us, but, you know, the adoption of standards is a step; obviously, in and of itself it’s not enough, right? So, it is the socialization of those standards and how they must be adapted. Our little inside joke is that if right now we’re working with 1300 institutions, there are 1300 different ways in which those standards are applied, so you have to have flexibility in there. But it’s true that it is work to actually take those standards and have all the faculty and the staff engaging around them and making them work for them; that’s where the secret sauce is. We will certify, for example, the outcome of meeting those standards, but really the power is in all the work that’s happening around those standards at the institution, to see what that means for the online education they’re providing, right? And so, I think that is proof that it can happen, but it doesn’t happen universally.

Gary Marcus: And I’ll just add one thing to that. The consequences take longer to develop, so you know some of it is close to life or death. It’s life or death for democracy, but it took a long time to see how bad our education system was in the US such that it would lead to that.

Luis Sentis: I think things are in flux. Both high school and college offerings now are really a client and provider sort of relationship. Tuition is very high, and that means that we’re competing on our offerings and we’re offering more electives, and now, both in high school and in college, we’re seeing people get degrees in a discipline when they actually haven’t taken courses in that discipline, because of the number of choices they have. I face that more frequently now. This happens a lot with AI. I interact with mechanical engineers often, and mechanical engineers are people who normally work close to the physical machine, right? There is a strong attraction to deep learning machines, and there is the choice and the offering from universities, and maybe in colleges as well, to now transform yourself into a person that has never touched a physical machine and has only worked on the concepts and models of AI. Maybe industry acts as a regulatory body: when you face the industry and they hire a mechanical engineer, they’re expecting that you know transfer functions or something like that, all the concepts that are sort of universal, right? So, I’m hoping for self-regulation, rather than imposing standards.

Kim Old: That does lead me into my final question before we break for questions from the audience. If a World Council on Education becomes a reality, who should lead it? How do we balance the roles of government, the private sector, academia, and civil society to create a truly inclusive and representative body?

Mona Demaidi: It’s not an easy question, honestly. I think we need all of these stakeholders. Who’s going to define which? It’s not going to be an easy process, because again, we’re going to be seeing that kind of power imbalance. I believe the governments could work together on maybe setting the baselines; then we have academia and researchers to ensure that the models are actually unbiased and to hold the models accountable; then we have civil society, which is very important, I believe, in terms of advocating on all the issues we’re going to be having. The industries are the ones who are actually developing the models, so we have to ensure that they’re actually taking the ethical aspects into consideration: who’s designing the models, who’s testing the models, where is the data taken from, and, in terms of LLMs, what is the generated data? It’s actually a long process, but all of them should be involved in the whole process. Thank you.


Luis Sentis: I mean, the usual suspects: UNESCO, the G20, industry, citywide and statewide bodies, citizens, and, I think, as teachers, educators, professors, we live a little bit in an ivory tower. Yuval Noah Harari would love me, but I would involve philosophers as well, and not only Western philosophers.

David Arditi: I would just want to be specific and say not a World Council on Education but a World Council on AI and Education, because I think right now there are too many hands, especially on higher education. K through 12 has government hands all over it in every country, right? Increasingly we see new hands trying to get on higher education, and I think that everybody has a different epistemology, a different way that we know things, and it becomes really high-conflict. You get people who generally, maybe, politically agree with each other, but you start to get them into trying to regulate what higher education looks like, and it’s World War 3.


Luis Sentis: I thought it was only the G7 that couldn’t take a decision. That’s a cultural, universal problem.

Gary Marcus: I’ll just add one thing to the opening list you gave, which included civil society, and I would put scientists there; I forget if you made them explicit. Balance matters, and what I have seen over and over again, watching AI policy, is that what we usually wind up with is like 80% industry, 10% well-known government leaders like senators, and like one civil society person, locked in a corner, who has no actual power, and the balance is wrong. Marietje Schaake had a great tweet about Chuck Schumer’s first AI Insight Forum, and she said it was as if you had a meeting on what to do about the environment and you had the leaders of Shell and Exxon, and one Greenpeace person, sitting in the corner. So I was just riffing on that, and that is what I’ve seen over and over and over again. So yes, you had the right composition. Now let’s get the balance of that composition right.

Deb Adair: Well, for me I guess it depends on what the purpose is, and I think if it’s policies and standards, it isn’t going to happen. I mean, UNESCO has been trying for years just to get a global convention on the recognition of qualifications, and there are, you know, many countries that just will never sign on to it, and I think the US is probably one of them, although that has more to do with how education is structured in the United States than anything else. If we talk about such a global forum as a body to address, I don’t want to be too silly with this, but, you know, those kinds of existential questions on AI and education, then we do want the kind of broad participation that I think folks here on this panel are talking about. If we think that there’ll be a single global body who can set standards and policy, I don’t think that’s going to happen, but in terms of providing direction and raising questions, aspirations, and concerns, absolutely.

Kim Old: Thanks, Deb. I want to open it up to your questions.


Audience: I’m interested in the assumptions around reading and how that ties into artificial intelligence. Take the US, for example: one of the richest, most well-resourced countries on the planet, but roughly 54% of adults read below a 6th-grade level. So if that’s the problem for the US, what does it mean for the rest of the world? What does it mean for teaching in the artificial intelligence age?

Luis Sentis: Yeah, I think this disparity is huge, right? I don’t think you can treat the US as a homogeneous country. You know, we still have, you know, 50% overall poverty and higher levels of poverty, lack of attention, lack of basic education, so you don’t have to go too far, right? Maybe in the US we tend to look at ourselves too much as kind of like the golden goose, or the opposite, we think that we’re terrible, so we have this kind of bipolar personality. But in exchanges here, talking to people from all over the world and listening to their ways of educating people and incorporating technology, I’m really very impressed. AI arguably is the strongest right now in the US, but the creative use of AI in the classroom in the US perhaps is not up to the standards. So, engaging and solving our problems: if we solve our problems at least internally, then we can go externally as well.

David Arditi: One big fear that I have, and this kind of goes along with writing your papers to be graded by AI so that you can then have papers that are sufficient for the machine: I read recently, I think it’s the state of Arizona, and each state in the United States controls its own education, and the state of Arizona, I believe, is creating a charter school, or private school. Charter schools are kind of public: some entities pay for them and they’re free for people to attend, but they’re not public schools. It’s a crazy system. They applied for a charter school to use AI to educate the students, so that the students could spend less time in the classroom and spend more time on job skills. So for me, the whole plan there is to make the students less educated, so that they can go out into the working world and be a reserve army of labor that is capable of working on machines in a particular way, and literacy is not the issue. At that charter school, the students will be spending, I think, 2 to 3 hours on reading, writing, math, and science, and then the rest of the day they’re working on financial literacy or welding. And they’re not even going to have teachers at these schools; they’re going to be coaches, because they’re not qualified enough, they don’t have teacher certification. So what can the school do? They can pay them less money, which drives revenue and profit using these machines. So, I think the overall goal is actually to lower literacy.

Gary Marcus: If everybody doesn’t already know Cory Doctorow’s wonderful term “enshittification”, that is a good example of the enshittification of education.

Audience: Malcolm Byrne is my name. I’m an Irish Member of Parliament, but prior to going into national politics I worked for our higher education authority, which was the funding and regulatory agency, so I’m now going to be doubly unpopular with academics: both a politician and someone who worked with the regulatory agency. But I’m going to respectfully say we’re almost coming at this from the wrong direction, because I think our challenge is: what is the purpose of education? And I do think, by the way, that’s why we would have difficulty trying to get a global council on education, because, as a national competence, it’s not like airlines, where everyone agrees we need to fly safely from A to B. For me, the purpose of education is around the acquisition of knowledge, and about how that knowledge can be used to better improve our communities, our societies, or, in the case of research, a particular discipline. I’m less worried about, you know, AI actually being used for term papers. That’s no different to the past, where students would have got other students or graduates to write term papers for them, or even gone to grind schools or whatever. It does, though, lead to the question around assessment and where we use AI in terms of assessment, and how we use AI as a technology, as a learning tool, because for me, and I get that there’s loads of hype, it’s no different to the introduction of the scientific calculator. You know, people thought, oh, this would be the end of math as we ever knew it. You know, mobile phones, what impact will they have on learning? And I mean, I’m even conscious that 3 of the panelists are using mobile phones, you know, as notes to support their points. So, AI can be an enabling technology, and I think it’s great it would do that. I do think, though, and it comes back to the questions around digital education and media literacy, that it’s about ethics, because the issue is not about students having any problem being able to use the technology; it’s about being able to understand the context in which it is right to use it, when there should be attribution, what about sharing of data, and so on. And then just one final question, which is this concept of a sovereign large language model, since we’ve been talking about artificial narrow intelligence in the last session. Norway has developed a sovereign model, and there’s the Irish language model, using trusted sources of data from the universities, established media, and government records, for citizens to be able to use for information. Is there something there where we could see collaboration among education providers, so that we know that the data that’s going into this LLM is trusted?

Gary Marcus: It’s amazing that we didn’t use the term AI literacy all day, and part of what you’re talking about is that, and I think it’s absolutely urgent, and it extends to knowing when you should trust these systems and when you shouldn’t, and so forth. On the sovereign models, the choices of data sources haven’t made as much difference as I think people thought they would, in terms of political leanings and so forth; that may change. Right now, it hasn’t made a huge difference. Even if you have perfect data, the systems will still hallucinate, so it’s not a complete solution to that. I don’t know whether the efforts to make the sovereign models will repay themselves or not. I think there’s also a political aspect, which is that we’re giving a lot of power to a small number of companies, and we don’t really need to, given that everybody is using the same recipe. And a lot of this stuff has been open-sourced recently, particularly by China. So maybe countries should be making sovereign models simply for economic and political independence, not so much because the results are particularly different from one of these models to the next. They kind of all have the same problems, but they all have the same utility; why should you surrender all your data to OpenAI when you can keep some of it in your own nation? I think there’s a reason to pursue it even if the content of the models isn’t ultimately that different: why pay for it when you can get it for free, and why surrender your data when you could have better policies locally? So I guess I’m more in favor of that than I used to be.

David Arditi: And to follow up, kind of where you’re going: I think that the number one reason, especially for higher education, is to make citizens, or people within a country, educated enough to participate in the public sphere, to participate in democracy; that is a cornerstone of democracy. To that end, I’ve created a platform, it’s just beginning, called Free Knowledge, and the idea is to give university education to people through YouTube videos, not to promote YouTube and Google, but because we have this barrier in society, and I think this is where a lot of the misinformation and disinformation comes from: before we even get to AI literacy, we don’t have media literacy. What is good information, right? We have all this stuff going on because people are going, whoa, we can’t trust science, and that is the opposite of an educated citizenry, and it’s intentional, as Bill Clinton was saying yesterday. We need to push back against that, so I agree, we need AI literacy, but before we can even get to AI literacy, we need media literacy. We need people who can critically analyze content and say, is this good information or is this bad information? And I don’t think we can get to AI literacy until we have media literacy.

Kim Old: Yeah, we’re definitely at a breaking point, in terms of everything being under attack, from science to the media. We’ve got one more question behind you.

Audience: One thing that I’m quite surprised hasn’t been brought up is the role of play. When it comes to the role that education plays, it’s not just the passing on of knowledge; it’s understanding context, how that knowledge is applied, and what your role is within the application of that knowledge. I think sometimes one of the reasons for the lack of media literacy is that people don’t understand how people get to know what they know when they tell you, and so they feel personally, in terms of their own identity, threatened, and they feel they have to pick a side rather than, look, you know what you’re talking about, you’re telling me, I understand why you are an expert, and therefore I can make an informed decision as to where I choose to put my trust. So, bringing it back around to play in AI, rather than keeping on talking about it in a very intellectual, knowledge-based context of education: an environment where you allow learners to have control over the design of play, or the use of play with AI, depending on what stage of learning they’re at, gives them not only an understanding of what they’re capable of, what it’s capable of, and how it is created, and further down the line that means you are creating citizens, people who understand what it is.

Kim Old: Thank you for that.

Audience: Yeah, well, it occurs to me that one of the other purposes of education is socialization. That’s certainly why I sent my toddler to daycare; from the beginning all the way through to higher education, there’s a need to work collaboratively with other people, not just collaboratively with machines. I sort of want to ask the panel to think about, and perhaps provide some response to, how you ensure that humans remain in education, in a qualified sense, and are engaging with people. The reason I say that is that one of my colleagues did some citizen science research in Australia around nurses’ uses of AI in hospital settings, and asked them to look at how they used it and then critically reflect on whether it hindered their work. One of the conclusions they drew was that there was efficiency gained by using various sophisticated technologies, but they were very protective of the legislated nurse-to-patient ratio that currently exists, because their concern is that this will be used as a justification to diminish or alter that ratio, and to require greater efficiency from nurses through offloading tasks to AI. Now, I think that is one way to think about it as well: you shouldn’t be offloading work onto machines so that you can reduce the human component, because socialization is an important component of education, and facilitating that collaboration with experts, whether teachers or experts in education, is critical. In the end, though, with the decline of unionization, I mean, in the US at least, occupational health and safety might potentially be the vehicle through which you could have this discussion, but there is a problem in terms of engaging those stakeholders in the process. With very low rates of unionization, what is the other way to bring in workers, the workers in these fields that deliver these services for the public at large, to ensure that voice? Is there an alternative to that vision of union involvement or organized workplace involvement, or some understanding of how to incorporate this technology in a safe way, in a very granular, practical manner?

Luis Sentis: It is your choice to choose the level of college, right? If you sign up for a top 20-30 college in the US, you’re probably going to work much more than a normal worker, right? So, if you’re going to work less, go somewhere else, right? But I do completely agree with you. I do advocate for socializing and limiting the amount of overloading, because it prevents you from all these other experiences. It would be interesting to have a union of students, but many universities in the US are private universities, right? How are you going to put a union there? Public universities already have these cycles, and the student associations have a lot of say, and they are steering the university where they want to bring it, even sometimes having more power than the faculty. But at this moment my personal take is that we do overload students with too much work, and I would rather see kind of a united front on what the curriculum is and how many hours of work, and then students have more time for socializing, doing activities, theatre, traveling the world, and so on and so forth. Except that, you know, many of us went to rich universities where we had plenty of other things to do, but there are many students that just don’t have anything to do except going to class, right? So that must also be weighed in, if you will. Anybody else want to comment on that?

David Arditi: I find it interesting: Luis and I are in the same University of Texas system, but we’re at radically different schools. We’re a working-class, first-generation-student university, we’re a Hispanic-serving institution, and many students, all they want is to get that degree. They don’t want to do the work. So, what’s the effect if students feel like you’re shortchanging them? Well, they want me to shortchange them. The more I push back and try to force them to learn, that’s when my ratings go down, because I was too hard. So, that’s always kind of a back and forth. I think the one thing to keep in mind is that there are always these different interests to wrestle with, and you can’t have a one-size-fits-all solution to any of this, right?

Luis Sentis: I think that’s the solution: if we have the capacity and we have the money, we should not homogenize the whole degree, right? I think we have the capacity to address students one by one; we’re not in 1946, just coming out of the war, right? We’re now in another stage; there is much more support, many more tools for communication. I communicate with the students one by one: if a student doesn’t come to class, I write them an email, and then we have a conversation. So we have this capacity now to do these things.

Kim Old: Thank you, everyone, for joining us both in person and virtually; thanks, Deb and Leanda. I encourage you to continue the dialogue in your networks and stay connected. This is an evolving, quickly evolving, and important conversation. Thank you.