Generative AI and Education

It seems only logical that we give some attention to Generative AI on this blog, right? After all, AI is this shiny 'new' technology invented by 'OpenAI'. Yeah, right. When I was developing my Colony Prediction application in 2005–2006, I could not have imagined that almost 20 years later I would be writing on a blog that AI was new. You have probably guessed it by now: OpenAI did not invent AI, nor is it new technology. And yes, my Colony Predictor was an AI application, and yes, it was successful. So successful that we got a scientific publication out of it in 2008 (https://pubmed.ncbi.nlm.nih.gov/18778376/).

But in this blog I want to focus on Generative AI in relation to education. So we are going to discuss and answer a bunch of questions. And to make it a bit more fun, we will use generative AI to answer these questions and then discuss the answers. Fun, right?

Should you be afraid of AI?

No, you should not necessarily be afraid of AI. Artificial Intelligence is a powerful tool that can greatly improve efficiency and productivity in various fields. However, like any other technology, it has its potential drawbacks if misused or abused. It's essential to ensure responsible development and deployment of AI systems while addressing concerns related to privacy, security, and the impact on employment.

As you can see in the response of the Mistral LLM (which I used for all the answers, because, duh, free and open source), the LLM is clear that everything has drawbacks, but we already knew that. So we need not be afraid, but we should be cautious.
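For those who want to reproduce this at home, here is a minimal sketch of how you could ask a locally running Mistral model the same questions. It assumes you serve the model through Ollama's REST API on the default port; the host, endpoint, and model name are assumptions about your local setup, not something this blog depends on:

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "mistral") -> dict:
    # One non-streaming completion request; "stream": False makes the
    # server return the whole answer as a single JSON object.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(prompt: str, host: str = "http://localhost:11434") -> str:
    # POST the request to the locally running Ollama server and pull
    # the generated text out of the JSON response.
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask_local_llm("Should you be afraid of AI?")` then gives you the same kind of answer quoted above, generated entirely on your own machine.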

How can you defend the quality of your examinations against AI?
To defend the quality of your examinations against AI, consider implementing these strategies:

  1. Diversify question types: Use a mix of short-answer, essay, and multiple-choice questions to assess different skills and knowledge levels. This makes it more difficult for AI to predict answers accurately.
  2. Include open-ended questions: Open-ended questions require critical thinking, analysis, and interpretation that are harder for AI to replicate accurately.
  3. Implement anti-cheating measures: Use tools like plagiarism detectors, online proctoring services, or invigilators in exam centers to deter students from using AI assistance during the examination.
  4. Regularly update questions: Frequently refresh your question bank and examinees' study materials to keep up with current knowledge and make it harder for AI to predict answers based on outdated information.
  5. Assess higher-order thinking skills: Design questions that test problem-solving, critical thinking, creativity, and judgment, which are more challenging for AI to replicate accurately.
  6. Encourage active learning: Foster an environment where students engage in discussions, group projects, and hands-on activities during the course of their studies. This helps them develop skills less susceptible to AI assistance.
  7. Continuously monitor and adapt: Regularly review your examination's effectiveness and make necessary adjustments based on feedback from students, teachers, or external evaluators.
  8. Collaborate with experts in AI: Engage with professionals in the field of artificial intelligence to stay informed about advancements and potential threats, allowing you to proactively address these challenges.

All these options to defend your examinations are interesting and, for the most part, correct. You could also use AI detection tools to discover if and how students have used AI. In my personal experience, these only work well when the student was really, really lazy, like handing in the AI result 100% verbatim. But what if the student used the AI and then rephrased much of the text? Will the AI detection tool catch it? In my experience, often not. But what to do then? What is wrong with putting the students in a classroom with a professor and letting them write live? Or organizing an oral defense. Or what about a poster presentation? They cannot AI-fake an oral defense.
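To make concrete why rephrasing defeats the simpler detectors, here is a deliberately naive sketch. The texts and the trigram-overlap score are invented for illustration; real detectors are far more sophisticated, but they suffer from a related weakness:

```python
# Naive "detection" sketch: score = fraction of a submission's word
# trigrams that also appear in a known AI-generated answer.

def trigrams(text):
    # All consecutive three-word sequences, lowercased.
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def overlap(ai_text, submission):
    # Share of the submission's trigrams that match the AI text.
    subm = trigrams(submission)
    return len(subm & trigrams(ai_text)) / len(subm)

ai = "napoleon bonaparte was a french military leader who rose to power"
copy = ai  # the "really, really lazy" student
paraphrased = "the french commander napoleon bonaparte gained power as a military leader"

print(overlap(ai, copy))         # 1.0 -- trivially caught
print(overlap(ai, paraphrased))  # 0.0 -- same content, slips through
```

The verbatim copy scores a perfect match, while the rephrased version shares not a single trigram, which is exactly the pattern I see with real detection tools.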

How could you use AI to your advantage, speaking from the position of the professor?

As a professor, you can leverage AI technology to enhance teaching and learning in several ways:

  1. Personalized Learning: Use AI algorithms to analyze student performance data and provide personalized feedback or recommendations for improvement. This helps students focus on areas where they need more attention.
  2. Intelligent Tutoring Systems (ITS): Implement ITS that can adapt to individual learners' needs, providing customized learning experiences based on their abilities and preferences.
  3. Automated Grading: Utilize AI-powered grading tools for objective assessment of multiple-choice questions or essay responses, saving time while maintaining consistency in grading standards.
  4. Predictive Analytics: Employ predictive analytics to identify students at risk of falling behind or dropping out, allowing you to provide targeted interventions and support.
  5. Enhanced Course Delivery: Incorporate AI-powered tools like chatbots for instant feedback, virtual assistants for administrative tasks, or adaptive learning platforms that adjust content based on student performance.
  6. Improved Accessibility: Utilize text-to-speech and speech recognition technologies to make course materials more accessible for students with disabilities.
  7. Content Creation: Use AI tools like natural language generation (NLG) to create personalized feedback, summaries or even generate new content based on your input.
  8. Collaborative Learning: Encourage collaborative learning through AI-powered discussion forums and group projects that facilitate peer-to-peer learning and knowledge sharing.
  9. Research Assistance: Leverage AI tools to analyze large datasets, identify patterns, or assist with literature reviews in your research endeavors.
  10. Professional Development: Utilize AI resources to stay updated on advancements in your field, improve teaching strategies, and collaborate with other educators worldwide.

By incorporating these AI applications into your educational practice, you can create a more efficient, effective, and personalized learning environment for your students.

A lot of stuff to think about. Do I miss something in the answer of the AI? Yes, I suppose I do. One of the best strategies, in my opinion, is using AI itself! Instruct the students to use AI for an assignment and then let them manually verify the outcome. Let's take history as an example: write a biography of Napoleon Bonaparte using generative AI, then research the result on its merits.

Why is the quality of generative AI dubious?

The quality of Generative Artificial Intelligence (AI) can be considered dubious due to several reasons:

  1. Limited Context Understanding: Current AI models struggle with understanding context and nuances in language, which can lead to incorrect or nonsensical outputs when generating text or images.
  2. Hallucinations: AI systems may generate false information based on patterns they have learned from the training data but are not accurate reflections of reality. This is known as "hallucination" in generative models.
  3. Data Bias: If the training data used to train a generative model contains biases, these will be reflected in the generated outputs, potentially perpetuating existing prejudices or misinformation.
  4. Security and Privacy Concerns: Generative AI can create convincing but fake content, posing risks for security (e.g., deepfakes) and privacy (e.g., synthetic images of individuals).
  5. Overconfidence in Models: Relying too heavily on generative models without proper validation or quality control measures may lead to incorrect assumptions or decisions based on their outputs.
  6. Lack of Common Sense: Current AI systems often lack common sense and the ability to reason about real-world situations, which can result in flawed generation processes.
  7. Continual Improvement Dependency: The effectiveness of generative models depends heavily on the availability of high-quality training data and continuous improvements in algorithmic techniques. This means that their quality is subject to external factors beyond our control.
  8. Evaluation Challenges: Assessing the quality of generative AI can be challenging, as it often involves human judgment and may vary depending on context or individual preferences.
  9. Overreliance on Technology: Relying excessively on generative AI without considering its limitations may lead to a devaluation of critical thinking skills and human creativity.

While generative AI has shown impressive capabilities, it is essential to recognize these challenges and work towards addressing them through ongoing research, development, and responsible deployment.

Here the AI gave a very complete and mostly correct answer, but there is one point I need to emphasize more. Over the last three years I have given many lectures on data management. From these lectures I learned that 100% of the organizations I have spoken to have data quality issues. The same data that will be used to train the models. But if you put S... in, you get S... out, right? This is no different with an LLM. So if 100% of organizations have problems with data quality, what does that tell us about the quality of the LLMs?
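The garbage-in, garbage-out point can be shown with a deliberately tiny sketch (the data and the "model" are made up for illustration): the same trivial nearest-neighbour rule gives a wrong answer as soon as its training labels are corrupted. An LLM trained on an organization's flawed data suffers from the same principle, only at a much larger scale:

```python
# Toy illustration of "garbage in, garbage out": the same simple
# 1-nearest-neighbour "model", trained once on clean labels and once
# on corrupted ones, answers the same question differently.

def predict(train, query):
    # train: list of (value, label); return the label of the closest value.
    return min(train, key=lambda pair: abs(pair[0] - query))[1]

clean = [(1, "low"), (2, "low"), (8, "high"), (9, "high")]
dirty = [(1, "high"), (2, "low"), (8, "low"), (9, "high")]  # two labels flipped

print(predict(clean, 1.2))  # low  -- learned from good data
print(predict(dirty, 1.2))  # high -- same model, bad data, wrong answer
```

Nothing about the model changed between the two calls; only the quality of the training data did.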

What can we expect from generative AI in the coming years?

In the coming years, we can anticipate several advancements and developments in Generative Artificial Intelligence (AI):

  1. Improved Model Performance: Expect further improvements in model performance as researchers continue to develop more sophisticated algorithms and training techniques. This will likely lead to better text generation, image synthesis, and other generative tasks.
  2. Wider Applications: As generative AI becomes more accurate and efficient, we can expect it to be applied across a broader range of industries and use cases, such as content creation in media, personalized marketing, or even scientific research.
  3. Greater Integration with Other Technologies: Generative AI will likely become better integrated with other emerging technologies like edge computing, quantum computing, and blockchain, leading to more powerful and efficient generative systems.
  4. Enhanced User Interfaces: We may see the development of more intuitive user interfaces that enable users to interact with generative models in a more natural way, making it easier for non-experts to leverage these technologies.
  5. Increased Regulatory Oversight and Guidelines: As generative AI becomes more prevalent, we can expect increased regulatory scrutiny and the development of guidelines aimed at ensuring responsible deployment and addressing potential risks associated with this technology.
  6. Greater Focus on Explainability and Trustworthiness: Researchers will likely continue to work towards improving the explainability and trustworthiness of generative AI systems, making them more transparent and less prone to generating misinformation or perpetuating biases.
  7. Continued Debate over Ethical Implications: The discussion around the ethical implications of generative AI is expected to persist as society grapples with questions related to privacy, security, and potential societal impacts.
  8. Emergence of Hybrid Systems: There may be a rise in hybrid systems that combine Generative AI with other types of AI (e.g., Reinforcement Learning) or traditional programming techniques to create more versatile and adaptable solutions.
  9. Expansion of Open-Source Resources: The open-source community will likely contribute significantly to the development of generative AI, providing accessible tools and resources for researchers, developers, and enthusiasts alike.
  10. Blurring Boundaries between Human and Machine Creativity: As generative AI continues to advance, it may become increasingly difficult to distinguish between human-created content and machine-generated outputs, leading to a blurring of the boundaries between human and artificial creativity.

Of course, generative AI will improve. At this moment, generative AI is a bit hyped. More and more companies, schools, and universities are lured by 'fake' specialists who want to help them not miss the boat. In this context, I like the Dutch saying (maybe because it is my mother tongue): "In het land der blinden is eenoog koning", in the land of the blind, the one-eyed man is king. Meaning that when almost everybody lacks knowledge of a subject, the people with a bit of knowledge really are the alpha monkey on the rock. It is too bad that some of them take advantage of this information asymmetry. So take my advice and use it as you see fit.
