Artificial Intelligence (AI) is everywhere. From search engines that predict your query before you finish typing to AI-powered chatbots serving as the entry point for customer service, AI is embedded in our daily lives. For those of us in STEM (Science, Technology, Engineering, and Mathematics), this evolution brings both excitement and caution.
In academic STEM environments, AI is no longer a buzzword: it’s a tool actively reshaping the research process, the classroom experience, and even how we collaborate across institutions. Used thoughtfully, AI can increase productivity, uncover new research avenues, and support learning in innovative ways. But it also comes with important caveats, from privacy concerns to the erosion of original thinking.
Here, we explore how AI is being used in STEM disciplines today and how academics can apply these tools while staying alert to their risks.
Practical roles AI can play in STEM and STEM education
Here are a few ways in which I have found AI can support teaching and research in STEM:
Debugging code quickly and efficiently
Whether you’re teaching a Python course or running complex simulations, code debugging is an unavoidable part of the job.
AI tools like GitHub Copilot or ChatGPT’s code interpreter can analyse snippets of code, point out syntax errors and logic issues, or even suggest optimised alternatives. This can be especially helpful for early-career researchers and students still getting comfortable with programming, and it can take the guesswork out of deciphering cryptic error messages.
However, caution is needed. AI-generated suggestions may be incorrect, outdated, or insecure. Always double-check outputs against reliable documentation.
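To make this concrete, here is a small hypothetical Python example of the kind of bug an AI assistant is particularly good at spotting: the code raises no error at all, so there is no message to decipher, yet it quietly returns the wrong answer.

```python
# A hypothetical example of a silent logic bug: the code runs cleanly,
# so there is no error message to decode, but the result is wrong.

def rolling_mean(values, window):
    """Mean of the last `window` items (buggy version)."""
    return sum(values[:window]) / window   # bug: slices the FIRST window, not the last

def rolling_mean_fixed(values, window):
    """Mean of the last `window` items."""
    window = min(window, len(values))      # guard against short inputs
    return sum(values[-window:]) / window  # correct slice: the last `window` items

print(rolling_mean([1, 2, 3, 10, 20], 3))        # 2.0  -- silently wrong
print(rolling_mean_fixed([1, 2, 3, 10, 20], 3))  # 11.0 -- intended
```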
Transforming spreadsheets into actionable workflows
For many academics, spreadsheets are the backbone of administrative tasks: tracking article submissions, managing grant budgets, or monitoring student progress. But these static tables can quickly become overwhelming.
With AI tools, you can turn these spreadsheets into dynamic action items. For instance, a tracking sheet for journal articles can be parsed by AI to assign tasks to different editors and highlight bottlenecks in the process.
Used well, this transforms passive data into active project management, but always keep sensitive data off platforms that don’t offer proper security.
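As a rough illustration, here is a minimal Python sketch of what such a parsed workflow can boil down to; the file name and column names are hypothetical stand-ins for a journal tracking sheet.

```python
# A minimal sketch of turning a static tracking sheet into action items.
# The file name and the column names (Title, Editor, Status, DaysInStage)
# are hypothetical; adapt them to your own spreadsheet.
import pandas as pd

df = pd.read_excel("submissions.xlsx")  # needs openpyxl installed for .xlsx

# Flag submissions that have sat in one stage for more than 30 days.
bottlenecks = df[df["DaysInStage"] > 30]

# Group the flagged items into a per-editor to-do list.
for editor, items in bottlenecks.groupby("Editor"):
    print(f"\nAction items for {editor}:")
    for _, row in items.iterrows():
        print(f"  - {row['Title']}: stuck in '{row['Status']}' for {row['DaysInStage']} days")
```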
Idea development and critique—with a caveat
One of AI’s most helpful roles is in brainstorming. You can prompt AI to critique a hypothesis, suggest alternative research designs, or highlight methodological gaps. It feels like having a 24/7 research assistant: one that doesn’t mind reviewing your idea for the tenth time.
But here’s the caveat: AI is becoming one massive echo chamber. Trained on existing literature and online content, it tends to reinforce mainstream narratives and avoid riskier, less explored ideas. It doesn’t “think” in the way humans do; it reflects back what it has seen before. That makes AI a useful conversation starter, but not a source of academic originality.
Use it to sharpen your ideas and to spot flaws in your argument, but not to generate ideas wholesale.
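For example, a critique prompt along the following lines (the wording is entirely my own illustration) tends to draw out more useful pushback than an open-ended “what do you think?”:

```python
# A hypothetical critique prompt. The structure matters more than the exact
# wording: asking for objections, alternatives, and missing controls invites
# pushback instead of the model's default agreeable summary.
CRITIQUE_PROMPT = """\
You are a sceptical reviewer. Here is my hypothesis:

{hypothesis}

1. List the three strongest objections a hostile reviewer would raise.
2. Suggest one alternative study design that could test the same question.
3. Name any confounders or missing controls you can spot.
Do not compliment the idea; only critique it.
"""

print(CRITIQUE_PROMPT.format(hypothesis=(
    "Vibration signatures from bridge sensors can predict fatigue cracks "
    "months before visual inspection finds them."
)))
```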
Pattern recognition and anomaly detection in research
In fields like physics, biology, and engineering, research often involves detecting subtle patterns within large datasets. AI excels at this. Machine learning algorithms can detect anomalies in medical imaging, predict behaviour in materials science, forecast trends in structural health monitoring data, or flag gene sequences related to disease.
For PhD students or researchers working with complex datasets, this means uncovering trends that might take a human weeks or longer to detect manually. Consider this a level-up of our traditional statistical toolbox.
The power of AI lies in scale and speed, but interpretability remains a challenge. Knowing why a model made a certain prediction is often as important as the prediction itself.
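As a concrete sketch, a standard library like scikit-learn can flag outlying sensor readings in a few lines; the data below is simulated, and in a real monitoring campaign you would still need to dig into why a reading was flagged.

```python
# A minimal sketch of anomaly detection with scikit-learn's IsolationForest.
# The sensor readings are simulated for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))  # routine readings
faults = rng.normal(loc=6.0, scale=1.0, size=(5, 3))    # a handful of outliers
readings = np.vstack([normal, faults])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(readings)  # -1 marks a suspected anomaly

print(f"Flagged {np.sum(labels == -1)} of {len(readings)} readings as anomalous")
```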
Turning assigned reading into quizzes
Teaching STEM often includes dense reading materials that can be hard for students to absorb. AI can help here by generating multiple-choice quizzes or flashcards from scientific articles, helping students revise key points.
For instructors, this can be a real time-saver. Instead of writing every question manually, you can have AI generate a first draft and then tailor the questions for relevance and clarity.
Always verify the accuracy of AI-generated content. It can occasionally misrepresent or misinterpret complex material.
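Here is one hedged sketch of that first-draft step, assuming you use the OpenAI Python client; the model name and quiz format are my own assumptions, and any comparable model or API would serve equally well.

```python
# A hedged sketch using the OpenAI Python client (pip install openai).
# The model name and quiz format are assumptions, and every generated
# question still needs a human accuracy check before it reaches students.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_quiz(article_text: str, n_questions: int = 5) -> str:
    """Ask the model for a first-draft multiple-choice quiz on an article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute one you have access to
        messages=[{
            "role": "user",
            "content": (
                f"Write {n_questions} multiple-choice questions, four options "
                "each with the correct answer marked, covering the key points "
                f"of this text:\n\n{article_text}"
            ),
        }],
    )
    return response.choices[0].message.content

# Example: quiz_draft = draft_quiz(open("article.txt").read())
```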
Caveats: Data privacy, legal risks, and echo chambers
While AI offers benefits, we must approach it critically, especially in academia: we do not yet know the outcomes of AI-assisted learning for our students, and there are already several examples of unethical AI usage in research. Here are important caveats to consider:
- Data security is not guaranteed: Most AI tools, especially free ones, are not built with academic confidentiality in mind. Feeding a manuscript draft, unpublished dataset, or student information into a generative AI model may expose sensitive content to third-party servers; personal data, in particular, should be a hard no. Even “anonymous” usage can be stored and repurposed for model training, which may conflict with university policies or data protection laws. Assume that once information is entered into an AI tool, it may no longer be fully private.
- Deleted chats are not really deleted: AI platforms store your interactions, even if you delete them. These records can be retained for legal purposes, quality assurance, or model training. This practice poses ethical and legal concerns, particularly when discussing confidential information. Before using AI in your research or teaching, check the platform’s privacy policy and confirm whether enterprise or education versions with stronger protections are available.
- Intellectual stagnation: beware the echo chamber: AI tools are only as good as their training data, and that training data comes from the internet (including everything on Reddit), with all the biases ingrained there. Moreover, if you use generative AI tools for a while, they tend to adapt to your thinking patterns and mimic you rather than challenge you. So encourage students and colleagues to question AI outputs, trace them back to their sources, think independently, and spot the biases and confirmation loops in their own thinking.
 
Conclusion
Used well, AI can help STEM professionals work more efficiently, develop new teaching tools, and lighten the administrative load. Like any tool, it must be handled with care.
It may be tempting to hand over our mental heavy lifting to AI. In doing so, we may lose some of the very things that make academic work meaningful: curiosity, creativity, and critical analysis.
We need to teach students not just how to use AI, but how to use it wisely. This includes understanding its limitations, protecting their data, and staying vigilant against its potential to reinforce biases or discourage deeper inquiry.
In STEM, where so much depends on precision, ethics, and innovation, we must remain cautious and intentional. When used responsibly, AI can become a powerful partner in our work and accelerate discoveries. But we must lead, not follow. The future of STEM demands it.