The future is here and I’m scared.
Emma's Notes #56
If you’ve missed the exponential boom in AI over the past few months, you probably live under a rock. OpenAI launched ChatGPT on November 30th of last year and showed us all: the game has changed.
In the meantime, DALL-E has fundamentally disrupted the design and photography market. And anyone who thinks it will take a while for these innovations to become mainstream is wrong. My Instagram is flooded with AI-generated pictures. Design platform Canva already has an AI image integration. And this morning I woke up to an e-mail from Notion telling me that I am off the waitlist for Notion AI. Since then, I can ask Notion to write me a text, help brainstorm, translate to 14 languages, use simpler language, summarize, make my text longer/shorter, find action items, and more. Want to see it?
Here’s the above text in Spanish (with just one control switch):
Si has perdido el auge exponencial que hemos visto en la Inteligencia Artificial en los últimos meses, probablemente vivas bajo una roca. OpenAI lanzó ChatGPT el 30 de noviembre del año pasado y nos mostró a todos: los juegos han cambiado. Pero OpenAI no fue el único que causó sensación. Mi feed de Instagram estaba inundado con imágenes generadas por AI, creadas por aplicaciones como Canva, que tiene un generador de imágenes AI integrado. Esta mañana me desperté con un correo electrónico de Notion que me informaba que había salido de la lista de espera para Notion AI. Ahora puedo usarlo para escribir texto, hacer brainstorming, traducir a 14 idiomas, simplificar el lenguaje, resumir, hacer el texto más largo o más corto, identificar acciones que deben tomarse y mucho más.
And here are 3 dangers of AI in 50 words:
The dangers of AI include the increase of inequality, the limitation of human creativity and the possibility that AI may become autonomous and make decisions that harm humanity.
As I’m doing this, my body fills with both excitement and anxiety. I’ve been feeling those two things for a few months now whenever I think about AI.
When ChatGPT was released, for the first time in my life I didn’t feel excitement over technological advancement. Instead, I felt fear. I felt the same when I saw an advertisement for the Ray-Ban Stories smart glasses: deeply confronted with the feeling that the future is here quicker than I thought it would be. A fear that I’m not ready yet, that we are not ready.
At first, I shut up about it, because I felt ashamed and silly. Wasn’t I supposed to embrace the future as a young, intelligent, progressive woman? When I carefully started to share with my friends, I got similar reactions though.
One of my friends, who works as a data scientist, shared that he always thought that his skillset wouldn’t be the first to be outperformed. But realistically, that is exactly what is happening. While we thought by now no one would be driving cars anymore, we are still dependent on Uber drivers. And while I thought that my writing skills, creativity, and strategic insight made me useful - AI is proving me wrong.
This is not to say that we are outperformed right now. Because anyone who has played around with ChatGPT and DALL-E knows: they are certainly flawed.
For example, this is the list of action items the Notion AI created based on my text:
[ ] Fix spelling mistakes in the generated text
[ ] Summarize OpenAI and other companies' AI-powered applications
[ ] Identify action items with Notion AI
That’s not super useful and something a 15-year-old with reasonably developed executive skills could probably do a lot better. These flaws are just a matter of time though. I remember playing around with GPT-2 in 2019 and GPT-3 in 2020 (ChatGPT’s younger siblings) and thinking: this is cool, but it will take a while before it will actually be user-friendly. Well, a while was only 2 years. And because this growth is exponential, scope bias prevents us from thoroughly understanding how fast this is going.
But we also shouldn’t underestimate the power AI already has. ChatGPT has passed exams at business and law schools. It even passed the text-based portion of the US medical licensing exam. Now, we can have a big discussion about whether this means ChatGPT is super smart, or whether these exams do not actually require fundamental learning and understanding.
At my university, these discussions have been virtually constant. Our program relies heavily on self-study, combined with discussion-based classes. We never take tests, but rather have to hand in original assignment work. Right after ChatGPT launched, many students I know started using it. Some professors even encouraged us to use the tool to debug our code. At the same time, students have been flagged for submitting non-original, AI-generated work. The problem of students submitting AI-generated work disguised as their own might have been fixed: OpenAI itself released an AI-text classifier this week.
Even though this might put a stop to students submitting AI-generated work, we still don’t have an answer to the question: is using ChatGPT’s work plagiarizing? Some people say it is, because obviously: you didn’t write it yourself. Others find it important to factor in the relationship between the quality of input and output. What you get out of the tool is about as good as what you put in. Meaning, if you have a fundamental understanding of a topic, you can ask the AI a great question and direct it to improve its output. If you have no clue whatsoever though, you won’t be able to correct the output, which will leave you with a bad text and a net learning effect of 0.
AI is here to stay, which tasks us with finding answers to difficult questions. For now, most students I know agree that using it without critical thinking will only undermine your own learning. You are wasting your own time, that of your fellow students, and frankly: a lot of money too. (Can you tell what my opinion is 😉)
However, this is at a university with a strong focus on the development of metacognition, at which students know the importance of agency over their learning process. I doubt that the average opinion will be the same at universities where you sit in a lecture hall for weeks, then crunch for an exam, and memorize things you’ll never use again.
Underneath the surface of all these questions on how to deal with AI in schools and universities is a bigger question:
What is still useful to learn and do?
For the first time in my life, I’m actually not super sure. I have some intuitions about emotional intelligence, spiritual development, forming meaningful connections, learning to be of service to others, and learning to collaborate with AI. But I also have a lot of questions.
How are we protecting each other from the discriminatory and limiting biases AI has? How are we going to value and pay each other for labor when AI can do so many things so much faster and better? If usefulness becomes less relevant, what should we value then? Or what do I want to value?
For now, I have one answer: it helps to write about these doubts. It helps to talk about these fears. I encourage you to do the same.