Research Paper on the Ethical Usage of AI.
The Ethical Implications of Artificial Intelligence
by
Akeem Fearon
An assignment submitted in partial fulfilment of the requirements
for the course CPTR490: Advanced Project
Instructor: Mr. John Williams
Date: April 25, 2025
Department of Computer and Information Sciences
Northern Caribbean University
Abstract
The aim of this project is to delve deep into the world of ethical AI usage, especially within Jamaican society in the 21st century. We live in a modernized world where there is little need to exercise our own brain functions; simple answers are just a reach away, whether through a smartphone, a watch, a PC, or even a smart oven. Each and every one of us is driven by an urge to rely on artificial intelligence. Admittedly, it makes tasks a lot easier to complete; however, over-reliance on these types of technology poses severe risks. My objectives are to assess the situation and do a deep dive to understand how deep this issue runs and what we can do to alleviate the associated risks. With the help of interviews, surveys and thoughtful assessments, I plan to uncover the public’s opinion about AI and its ethical impacts and provide a descriptive analysis of it. The outcome of this research will no doubt shed light on how much importance is placed on ethics within the technological landscape of AI.
Acknowledgement
This research piece could not have been completed without the help of many individuals. I wish to show appreciation to all the participants who took precious time out of their day to assist. A special word of thanks also goes to my lecturer, Mr. Williams, whose understanding and patience are the greatest I have ever come to know. Finally, a special thanks goes out to that one special friend who went through each page and helped me every step of the way.
I couldn’t have done it without you all.
Contents
1. Introduction
1.1 The Importance of Ethical AI
1.2 Research Objectives
1.3 Research Questions
1.4 Scope of the Study
2. Literature Review
2.1 Theoretical Perspectives on Ethical AI
2.2 AI in Education
2.3 AI in the Medical Profession
2.4 AI in the Justice System: Ethical Implications and Concerns
2.5 Data Security and Surveillance
2.6 AI Failures and Ethical Concerns
3. Research Methodology
3.1 Research Design
3.2 Data Collection Methods
3.3 Sample Selection
3.4 Validity and Reliability
3.5 Ethical Considerations
4. Data Analysis and Findings
4.1 Survey Results
4.2 Assessment Results
4.3 Interview Findings
5. Discussion
5.1 Further Interpretation of Findings
5.2 Implications for Various Industries
5.3 Challenges and Limitations
6. Conclusion
6.1 Summary of Key Findings
6.2 Recommendations
6.3 Future Research
Appendix
References
1. Introduction
The ethical implications of artificial intelligence (AI) remain a largely untapped area of research, offering a wealth of opportunities for further exploration and refinement as technology continues to evolve. AI has undeniably become an integral part of modern life, shaping industries, automating complex processes, and enhancing human capabilities in unprecedented ways. As aforementioned, society has grown increasingly reliant on AI, with intelligent systems now capable of performing some of the most delicate and high-stakes tasks—ranging from robotic-assisted surgeries to the precise landing of space rockets using sophisticated algorithms.
Beyond these advancements, AI is also making significant contributions in law enforcement. One notable example is its integration within the Metropolitan Police in the UK, in collaboration with the Alan Turing Institute. This partnership has demonstrated the potential of AI to assist law enforcement agencies in deterring and preventing crime. By leveraging facial recognition technology and voice identification algorithms, police forces can enhance their investigative capabilities and improve security measures. However, while these applications highlight the positive aspects of AI, they also raise serious ethical concerns—particularly regarding issues of bias, privacy, and the potential for misuse. Studies have shown that AI-driven systems can sometimes yield biased results due to flawed training data, reinforcing existing societal inequalities rather than mitigating them.
This raises an important question: Why is this research necessary? AI ethics is a topic that has been extensively explored in first-world nations, where technology is deeply embedded in governance, industry, and daily life. However, there is a significant gap in research and discussion when it comes to AI's implications in developing regions. It is crucial that individuals, particularly in these emerging economies, understand the risks associated with an over-reliance on AI. While AI offers incredible convenience and efficiency, unchecked dependence on automated systems could pose significant risks—ranging from misinformation to the erosion of critical thinking skills.
This study seeks to emphasize the importance of responsible AI usage, particularly among younger generations. Students, under immense academic pressure, may increasingly turn to AI-powered tools to complete assignments, risking their intellectual growth and future career prospects. While AI can serve as a valuable educational aid, an overdependence on it could weaken problem-solving skills, creativity, and independent thinking—essential attributes in a rapidly evolving digital economy. By shedding light on the ethical risks of AI, this research aims to foster awareness and encourage responsible adoption of technology. Moreover, it calls upon software developers, researchers, and policymakers to implement ethical safeguards that ensure AI systems operate fairly, transparently, and without causing unintended harm.
Ultimately, this study aims to bridge the knowledge gap surrounding AI ethics, particularly in developing regions, by fostering a more informed and conscientious approach to AI integration. Through this research, the goal is to contribute to a broader conversation about the future of AI and the responsibility of humanity in shaping it ethically and equitably.
1.1 The Importance of Ethical AI:
As artificial intelligence continues to grow and integrate into various aspects of modern-day society, the ethical implications it raises become increasingly complex. AI has taken hold of key sectors such as healthcare, education, law enforcement and finance, where demands for accountability and security are especially high.
Ethical AI ensures that AI-driven systems operate in a manner that aligns with human values, mitigates harm and promotes inclusivity and transparency. Without ethical oversight, AI has the potential to reinforce biases, invade privacy and cause serious intentional or unintentional harm to society as a whole.
Mitigating Bias and Ensuring Fairness:
AI technologies are only as good as the data they are trained on. If an AI model is trained on biased, incomplete or corrupted data, it can produce discriminatory outcomes, reinforcing social inequalities (Buolamwini & Gebru, 2018). These concerns have been observed in facial recognition technologies, where studies have shown that these systems misidentify individuals with darker skin tones at a higher rate than lighter-skinned individuals. Such occurrences can have serious repercussions, especially in criminal justice, hiring processes and loan approvals.
To address these concerns, AI development must prioritize fairness-aware algorithms and diverse training datasets. Researchers and developers must also place greater emphasis on the importance of algorithmic transparency, ensuring that AI decision-making processes are understandable and accountable (Doshi-Velez & Kim, 2017). Without these safeguards in place, AI risks exacerbating existing biases, leading to discrimination and a loss of trust in AI-driven systems.
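To make the idea of a fairness audit concrete, the short Python sketch below computes the selection rate for each demographic group and the resulting demographic parity difference. It is a minimal illustration only: the group labels, decisions and the 0.2 review threshold are invented assumptions, not data or methods from this study.

# Minimal fairness-audit sketch over hypothetical model decisions.
# It checks whether the rate of positive outcomes differs across groups.
from collections import defaultdict

# Hypothetical (group, decision) pairs: 1 = approved, 0 = rejected.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
parity_gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: selection rate = {rate:.2f}")
print(f"Demographic parity difference = {parity_gap:.2f}")
# A large gap (for example, above 0.2) would flag the model for review.

Audits of this kind are one way of operationalizing the fairness-aware development the literature calls for, although real audits involve far more careful statistical and legal analysis.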
Protecting Privacy and Data Security
AI-powered systems rely heavily on vast amounts of personal data, which raises concerns about privacy violations, data breaches and unauthorized surveillance. Corporations and governments that deploy independent AI-driven algorithms must adopt ethical AI procedures by enforcing and abiding by strict data protection regulations. Studies postulate that AI can be misused for mass surveillance, which poses risks to individual privacy rights (Zuboff, 2019).
The General Data Protection Regulation (GDPR) in Europe serves as a beacon for ethical governance in AI, placing heavy emphasis on data minimization, informed consent and user control over personal data (Voigt & Von dem Bussche, 2017). Ethical practices must incorporate privacy-preserving techniques such as differential privacy and federated learning to ensure that these services do not compromise user data (Dwork & Roth, 2014).
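As a rough illustration of one such technique, the sketch below applies the Laplace mechanism that underpins differential privacy: noise is added to an aggregate count before release so that no single individual's record can be inferred from the output. The count, the epsilon values and the context are placeholders chosen purely for demonstration.

# Sketch of the Laplace mechanism behind differential privacy.
# A count query is perturbed with noise before release, so that any one
# person's presence or absence barely changes what an observer sees.
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    # A counting query has sensitivity 1, so the noise scale is 1/epsilon.
    return true_count + laplace_noise(1.0 / epsilon)

true_count = 128  # e.g. the number of patients with a given condition
for epsilon in (0.1, 1.0, 10.0):
    print(f"epsilon={epsilon}: released count = {private_count(true_count, epsilon):.1f}")
# Smaller epsilon means more noise: stronger privacy, lower accuracy.

The trade-off visible in the output, more privacy at the cost of precision, is exactly the balance that regulations such as the GDPR push organizations to negotiate explicitly.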
Ensuring Accountability and Transparency
A core challenge in the AI field is the “black box” problem, in which AI models make decisions without clear reasoning or explanations. This lack of transparency can lead to a loss of trust, especially in high-stakes environments such as healthcare and finance (Lipton, 2018). For example, if an AI model denies a patient lifesaving treatment, or a loan applicant is rejected without a clear reason, serious ethical concerns are raised.
To enhance transparency and accountability, scholars have argued for the implementation of Explainable AI (XAI), which provides insights into how AI models reach their decisions. Ethical frameworks should also mandate human oversight of the decision-making process, particularly in critical applications where AI errors can have severe consequences (Brynjolfsson & McAfee, 2017).
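One common family of post-hoc explanation techniques estimates how much a trained model relies on each input feature by shuffling that feature and measuring the drop in accuracy. The sketch below illustrates this with scikit-learn's permutation_importance on a synthetic dataset; the data and feature indices are invented for illustration and do not correspond to any system discussed in this paper.

# Post-hoc explainability sketch: permutation feature importance.
# Shuffle one feature at a time and record how much the model's score
# drops; the features whose shuffling hurts most matter most.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for, say, loan applications.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
# Reporting such importances alongside each decision is one simple way to
# make an otherwise opaque model easier to question and contest.

Techniques like this do not open the black box completely, but they give affected individuals and human overseers something concrete to interrogate.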
Promoting Human-Centered AI Development
The end goal of ethical AI is to ensure that AI serves humanity in accordance with our rules and regulations. AI must be developed with a human-centered approach, prioritizing societal well-being over corporate or governmental interests (Russell, 2019). Ethical guidelines, such as the ones proposed by the IEEE and UNESCO, advocate for values such as fairness, transparency, and human autonomy in AI systems (IEEE, 2019; UNESCO, 2021).
Additionally, there is growing concern regarding over-reliance on AI in education, especially among students who use these tools for academic work. While AI can enhance the learning experience, excessive dependence on it can hinder critical thinking and problem-solving skills (Selwyn, 2019).
The ethical development and deployment of AI are crucial for ensuring that AI-enhanced technologies do not cause harm or deepen social inequalities. By directly targeting bias, protecting privacy, ensuring accountability, and promoting human-centered development, society can fully utilize the power of AI while reducing its risks. Future research must continue exploring ethical frameworks to guide policymakers, developers and users toward responsible AI adoption.
1.2 Research Objectives
The ethical implications of AI have become a pressing matter of concern as AI systems increasingly influence decision-making across various sectors, including healthcare, education, finance, and law enforcement. This research aims to critically analyse these concerns and assess their potential risks and benefits.
Specifically, this project seeks to:
Examine the impact of AI in education, especially on student learning and academic integrity. With AI tools becoming more accessible and more powerful, concerns regarding their overuse in education have risen (Selwyn, 2019). This objective will explore whether AI enhances learning or fosters academic dishonesty.
Analyse the role of AI in the medical profession and its ethical concerns. While AI has improved diagnosis and treatment efficiency, ethical concerns surrounding accountability, decision transparency and patient privacy remain (Topol, 2019).
Investigate the potential over-reliance on AI in various professions. AI is designed to support human decision-making, but its unmonitored implementation could lead to an erosion of human expertise, especially in fields requiring critical thinking and ethical judgment (Brynjolfsson & McAfee, 2017).
Assess AI’s impact on data security and privacy. AI systems process vast amounts of personal data, which raises concerns about surveillance, consent and security vulnerabilities (Zuboff, 2019).
By addressing these objectives, this study aims to contribute to the growing body of literature on AI ethics and provide recommendations for responsible AI development and implementation.
1.3 Research Questions
To achieve the objectives outlined above, this research will address the following key questions:
To what extent does AI impact student learning and academic integrity?
How does the growing reliance on AI affect professional fields that traditionally require human expertise?
What ethical concerns emerge from the integration of AI in the healthcare sector?
What documented cases of AI shortcomings and failures exist, and what lessons can be drawn from them?
What ethical responsibilities do AI developers and policymakers bear in ensuring the creation of fair and unbiased AI systems?
1.4 Scope of the Study
This study will provide a comprehensive examination of the ethical challenges associated with the adoption of AI across various sectors, with a particular emphasis on the following areas:
AI in Education:
This section will analyze the impact of AI-powered learning tools on student behavior, engagement, and academic integrity. It will explore the potential for AI to enhance or hinder critical thinking skills and its broader implications for the educational landscape (Selwyn, 2019).
AI and Data Privacy:
Focusing on AI's reliance on large datasets, this study will investigate the ethical concerns surrounding user privacy, data breaches, and the risks posed by algorithmic surveillance (Zuboff, 2019). The role of data governance and privacy regulations will also be examined.
Case Studies of AI Failures:
This section will review documented instances where AI systems led to significant ethical or social issues, such as biased hiring algorithms, wrongful arrests due to facial recognition errors, and other notable AI shortcomings (Buolamwini & Gebru, 2018). The aim is to draw lessons from these failures to inform future AI development practices.
Theories of Ethical AI:
This portion will explore established ethical frameworks and theories applied to AI, such as utilitarianism, deontology, and virtue ethics, and their relevance in AI development. It will also examine the evolving nature of ethical AI standards and the importance of incorporating human oversight in AI decision-making.
AI in the Court System:
Focusing on the use of AI in legal decision-making, this section will examine AI's role in predictive policing, risk assessment tools, and sentencing decisions. It will also analyze ethical concerns such as algorithmic bias, fairness, transparency, and accountability, particularly in relation to systems like COMPAS and predictive policing algorithms. The implications of AI-driven decisions on human rights and due process will also be discussed.
2. Literature Review
With the dawn of the 21st century, Artificial Intelligence (AI) has become increasingly integrated into everyday life, particularly in the field of education. Advances in machine learning have enabled AI-driven systems to deliver personalized learning experiences, proving especially valuable in supporting students with diverse learning needs. According to Claned, an online learning platform, it is projected that within the next three years, nearly 47% of learning systems will be powered by AI technologies.
Beyond education, AI continues to transform various sectors, including healthcare, finance, and law enforcement. While the benefits of AI are undeniable—ranging from improved efficiency to enhanced decision-making—its ethical implications have sparked growing concern among scholars, policymakers, and technology developers. This literature review explores the evolving ethical landscape of AI, examining foundational theories of ethical AI, its influence on education and healthcare, challenges related to data security and privacy, and prominent case studies of AI failures that highlight the need for responsible development and oversight.
2.1. Theoretical Perspectives on Ethical AI
The ethical development and implementation of AI-driven software has long been widely debated, leading to the establishment of several theoretical frameworks that guide AI ethics. One of the foundational theories in this field is deontological ethics, which holds that AI systems must adhere to strict rules and principles, ensuring fairness and complete transparency in their decision-making processes (Floridi & Cowls, 2019). Similarly, utilitarian perspectives emphasize maximizing overall societal benefits while minimizing the harm caused by AI applications (Bostrom, 2014).
A significant contribution to AI ethics is Asimov’s Three Laws of Robotics, which, while fictional, have influenced real world discussions on designing specific AI systems that prioritize human safety (Asimov, 1950). In recent years, researchers have concentrated on algorithmic fairness, which aims to ensure equitable results for various demographic groups in order to eradicate biases in AI decision-making (Dwork et al., 2012). Furthermore, AI should be trained to conform to human ethical standards and ideals, according to Value Alignment Theory (Russell, 2019).
Ethical AI frameworks have been proposed by international organisations like the IEEE and UNESCO, which highlight values like accountability, transparency, and human-centered AI behavior (IEEE, 2019; UNESCO, 2021). These theories guide developers and policymakers in the creation of responsible AI systems, and they form the basis for ethical AI governance.
2.2. AI in Education
The use of AI in education has revolutionised conventional classroom settings by automating administrative duties and delivering individualised learning experiences. But its moral ramifications have spurred discussions about data privacy, student reliance, and academic honesty.
The over-reliance on AI tools, like ChatGPT and automated essay graders, is a significant worry in AI-powered education as it may impair students' ability to think critically and solve problems (Selwyn, 2019). Research has demonstrated that although AI-generated replies are effective, they frequently fall short of human thinking in depth, resulting in learning experiences that are shallow (Holmes et al., 2021). Additionally, false positives and the monitoring of student work are ethical issues raised by AI-driven plagiarism detection programs (McArthur, 2021).
Bias in AI-driven learning resources is another serious problem. According to research, biased training data may cause AI-based learning systems to favour some groups over others, hence escalating educational disparities (Luckin, 2017). One of the biggest obstacles to the ethical use of AI in education is making sure that technology complements human teachers rather than takes their place.
By facilitating individualised learning and enhancing accessibility, artificial intelligence (AI) holds the potential to completely transform education. But it also presents moral dilemmas:
- Data privacy: AI systems in education frequently gather and examine enormous volumes of student data, which raises questions around data security and permission.
- Bias in Algorithms: If AI-driven tools, like grading systems, are trained on non-representative datasets, they may reinforce preconceived notions.
- Equity Concerns: Since not all students have equal access to technology, an over-reliance on AI could make inequality worse.
- Case Study: Students of colour were disproportionately flagged by Proctorio, an AI-based proctoring technology, due to its intrusive surveillance methods and biased algorithms.
2.3 AI in the Medical Profession
Artificial intelligence (AI) has revolutionized the healthcare sector by supporting medical research, treatment planning, and diagnosis. AI systems, particularly in identifying conditions like cancer and diabetic retinopathy, have demonstrated significant accuracy (Topol, 2019). However, alongside these advancements, critical ethical issues persist—chiefly regarding algorithmic biases, patient privacy, and accountability.
Key Ethical Concerns in AI-Driven Healthcare:
The "Black Box" Problem: One of the most prominent ethical challenges is the "black box" nature of AI systems. These systems make decisions—such as medical diagnoses—without providing transparent justifications, making it difficult for medical practitioners to verify or challenge the AI’s recommendations (Lipton, 2018). This opacity can undermine trust in AI and raises questions about accountability in medical practice.
Medical Liability and Accountability:
The lack of transparency in AI decision-making also complicates issues of medical liability. When an AI system makes an incorrect diagnosis, it becomes challenging to determine legal responsibility. Should the AI be held accountable, or should the medical professionals who rely on it bear the burden of responsibility (McDougall, 2019)?
Privacy and Data Security:
AI in healthcare depends on vast amounts of patient data for accurate predictions and diagnoses. This dependence raises concerns about privacy and the potential for data breaches. While AI can enhance predictive analytics and improve healthcare outcomes, researchers warn that the increased risk of unauthorized access to sensitive patient data could undermine trust in these systems (Leslie, 2019).
Moral Dilemmas in AI Utilization
Despite the potential of AI in healthcare, several moral dilemmas remain unresolved:
Diagnostic Accuracy: AI systems like IBM Watson for Oncology have faced criticism for making incorrect treatment recommendations, which could endanger patients' lives. Such errors highlight the risks associated with overreliance on AI in clinical decision-making.
Bias in Healthcare Algorithms: Studies have shown that biased training data can lead to AI systems underdiagnosing diseases in minority groups. For instance, a study by Obermeyer et al. (2019) published in Science revealed that an algorithm used in the U.S. healthcare system systematically underestimated the health needs of black patients compared to white patients, due to its reliance on historical healthcare spending data rather than actual medical conditions. This bias exacerbates existing healthcare disparities and highlights the urgent need for more inclusive and representative datasets in AI development.
Patient Autonomy and Consent:
The integration of AI in healthcare relies heavily on vast amounts of patient data to train algorithms for diagnosis, treatment planning, and predictive analytics. This dependence raises significant ethical concerns regarding data ownership and informed consent. Central to the debate is the question: How much control should patients have over their medical data, especially when AI systems influence critical healthcare decisions?
Patients often have limited understanding of how their data is used once it enters an AI system. Traditional models of consent may not adequately cover the scope of AI-driven data processing, especially when data is reused or shared across institutions. The lack of transparency can lead to a breach of patient autonomy and trust.
A study by Mikk et al. (2017) titled "Personalized Medicine: The Data Ownership Question" highlights that patients are increasingly demanding more control over their medical information. The research found that individuals want clear, granular consent mechanisms that allow them to decide how, when, and by whom their data is accessed, especially when it feeds into opaque AI systems. Without robust data governance frameworks, the risk of data misuse, re-identification, and breaches increases.
This issue emphasizes the need for ethical AI development that includes:
Transparent data practices, explaining to patients how their information will be used.
Dynamic consent models, allowing individuals to update or withdraw consent as technology evolves.
Data stewardship policies, where institutions act in the best interest of patients while ensuring data is protected and ethically used.
Ultimately, ethical AI in healthcare must uphold patient rights, prioritize informed consent, and foster trust in the technologies that increasingly shape clinical decision-making.
Balancing Innovation with Ethics
As AI continues to shape the healthcare landscape, the development of ethical AI must balance technological advancements with patient rights, privacy, and trust. It is essential to address biases, ensure transparency, and safeguard patient data to foster an AI-driven healthcare system that prioritizes both innovation and ethical considerations.
2.4 AI in the Justice System: Ethical Implications and Concerns
Artificial intelligence (AI) has made significant strides in the justice system, impacting judicial decision-making, predictive policing, risk assessments, and case management. While AI holds the promise of improving efficiency, reducing human bias, and analyzing vast amounts of legal data, it also raises ethical concerns related to fairness, transparency, and accountability. These concerns highlight the need for careful oversight and regulation as AI systems become increasingly integrated into the legal landscape.
AI and Predictive Policing
Predictive policing utilizes AI algorithms to analyze historical crime data and forecast where future crimes may occur. The goal is to optimize the allocation of law enforcement resources and prevent crime. However, predictive policing has drawn criticism due to potential algorithmic bias. Lum and Isaac (2016) found that these AI systems disproportionately target minority communities because they rely on historical crime data, which often reflects racial biases in law enforcement practices. This creates a feedback loop, where certain neighbourhoods are over-policed based on past crime reports, rather than actual crime trends.
Richardson et al. (2019) further argued that predictive policing exacerbates systemic inequalities, as flawed datasets perpetuate biases that are rooted in human behaviors. To address these concerns, transparency and oversight are essential in ensuring that AI systems do not reinforce racial and socioeconomic disparities within policing.
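The feedback loop Lum and Isaac describe can be made concrete with a toy simulation, sketched below under invented assumptions: two districts share an identical underlying crime rate, but crimes are only recorded where officers patrol, and patrols are allocated in proportion to previously recorded crime. Nothing here models real policing data; it only shows the mechanism.

# Toy simulation of a predictive-policing feedback loop (illustrative only).
# Both districts have the same true crime rate, but crime is only recorded
# where patrols go, and patrols follow past recorded crime.
import random

random.seed(42)

TRUE_CRIME_RATE = 0.3                              # identical in both districts
recorded = {"district_a": 10, "district_b": 5}     # initial recording gap
TOTAL_PATROLS = 100

for year in range(1, 6):
    total_recorded = sum(recorded.values())
    for district in recorded:
        # Patrols allocated proportionally to past recorded crime.
        patrols = round(TOTAL_PATROLS * recorded[district] / total_recorded)
        # More patrols in a district mean more of its crime gets recorded.
        new_records = sum(1 for _ in range(patrols)
                          if random.random() < TRUE_CRIME_RATE)
        recorded[district] += new_records
    print(f"Year {year}: {recorded}")
# The initial gap never corrects itself: the over-recorded district keeps
# receiving the larger share of patrols, so its recorded crime stays higher
# even though the true crime rates never differed.

Even in this crude form, the simulation shows why transparency about how training data were generated matters as much as the algorithm itself.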
AI in Judicial Decision-Making and Risk Assessment
AI tools are increasingly being used to assist judges in making sentencing and bail decisions through risk assessment algorithms. One such system, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), assesses the likelihood of a defendant committing future crimes based on various risk factors. However, a study by Angwin et al. (2016) from ProPublica revealed that COMPAS exhibited racial bias, often classifying black defendants as higher-risk than their white counterparts. This finding raises critical ethical concerns about fairness and accountability in AI-driven legal decisions.
Kehl et al. (2017) advocate for greater transparency in the development and application of these systems, suggesting that algorithms used in the legal system must be interpretable to ensure that they do not perpetuate discrimination. The ongoing debate about whether AI should play a significant role in judicial decision-making hinges on concerns about bias, fairness, and the lack of human discretion in crucial legal decisions.
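The disparity ProPublica reported can be expressed as a gap in false positive rates, that is, the share of people who did not reoffend but were nevertheless labelled high-risk. The sketch below computes that rate per group on a tiny invented dataset; the groups, labels and outcomes are placeholders, not COMPAS data, and a real audit would use far larger samples and confidence intervals.

# Sketch: comparing false positive rates across groups (invented data).
# A false positive is someone labelled "high risk" who did not reoffend.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("group_a", True,  False), ("group_a", True,  False),
    ("group_a", False, False), ("group_a", True,  True),
    ("group_b", False, False), ("group_b", True,  False),
    ("group_b", False, False), ("group_b", False, True),
]

def false_positive_rate(group: str) -> float:
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in non_reoffenders if r[1]]
    return len(false_positives) / len(non_reoffenders)

for group in ("group_a", "group_b"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.2f}")
# Unequal false positive rates mean one group bears more wrongful
# "high risk" labels, which is the core fairness objection to such scores.

Making such error rates auditable, group by group, is one practical form that the interpretability Kehl et al. call for could take.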
AI in Legal Research and Case Management
AI-powered legal research tools, such as LexisNexis and ROSS Intelligence, have revolutionized the way lawyers and judges analyse case law and statutes. These AI systems can process thousands of legal documents, identify relevant precedents, and assist in legal drafting (Surden, 2019). While this has improved efficiency, critics argue that over-reliance on AI in legal research could result in a decline in critical legal reasoning and analytical skills among legal practitioners (Ashley, 2017).
Data privacy is another significant concern in the use of AI in legal case management. Court systems that adopt AI must ensure that sensitive legal documents remain secure and that AI systems do not inadvertently expose confidential information (Wischmeyer, 2020). Striking the right balance between efficiency and ethical considerations is essential for the responsible integration of AI in legal processes.
AI and Bias in Sentencing and Parole Decisions
AI is also used in parole boards and correctional facilities to assess inmate behaviour and predict the likelihood of rehabilitation. However, scholars caution that these AI models may fail to fully account for social and economic factors that influence recidivism, leading to inaccurate predictions about an inmate's potential for rehabilitation (Eaglin, 2017). The lack of transparency in sentencing algorithms further complicates ethical concerns, as defendants may not understand how AI influences their legal outcomes (Citron, 2019).
To address these issues, researchers advocate for the development of Explainable AI (XAI) in the justice system. XAI can provide greater judicial oversight and public accountability by ensuring that AI models are transparent and their decisions can be easily understood and contested (Doshi-Velez & Kim, 2017). Additionally, clear policies and guidelines are needed to ensure that AI tools used in the justice system align with core principles of justice, human rights, and fairness.
2.5 Data Security and Surveillance
The increasing reliance on AI in various sectors has amplified concerns over data security and privacy. AI systems process and analyse massive datasets, often containing sensitive personal information, making them prime targets for cyberattacks. One major concern is the misuse of AI for mass surveillance, where governments and corporations deploy AI-powered facial recognition and tracking systems, raising ethical questions about individual freedoms (Zuboff, 2019). Studies show that authoritarian governments have leveraged AI to monitor citizens, suppress dissent, and manipulate public opinion through automated disinformation campaigns (Feldstein, 2019).
Additionally, consumer privacy is at stake in AI-driven data collection. The use of AI by businesses like Google, Facebook, and Amazon to examine user behaviour and preferences raises concerns regarding informed consent and data exploitation (Andrejevic, 2020). Ethical AI frameworks hold that stricter data governance laws are essential, such as the General Data Protection Regulation (GDPR) in Europe, which requires increased transparency and user control over personal data (Voigt & Von dem Bussche, 2017).
The increasing use of AI across various sectors has led to growing concerns about data privacy. In 2020, a data breach involving AI-powered facial recognition software exposed the personal data of millions of individuals, including images of people who had not consented to having their faces scanned (Geist, 2020). This case exemplifies the risks of AI technologies that rely on large datasets, often obtained without explicit consent from individuals. The breach raised serious ethical questions about data ownership, consent, and privacy rights in the age of AI.
As AI becomes more integrated into everyday life, the ethical implications of data collection and usage must be addressed. The growing reliance on AI for surveillance, customer profiling, and targeted advertising further exacerbates concerns about how personal data is used and protected. Ensuring that AI systems are designed to respect privacy rights and comply with data protection laws is crucial for mitigating ethical risks.
2.6 AI Failures and Ethical Concerns
Despite significant advancements in AI, numerous cases have highlighted the ethical challenges that arise when AI systems fail. These failures often stem from issues such as biased algorithms, lack of transparency, inadequate oversight, and the unintended consequences of AI deployment. Below are some notable cases of AI failures, which raise significant ethical concerns:
Biased Hiring Algorithms
In 2018, Amazon faced backlash when it was revealed that its AI-powered hiring tool systematically discriminated against female candidates. The algorithm, designed to help screen resumes for job applicants, was found to favor male applicants due to biased training data. The system was trained on resumes submitted to Amazon over a ten-year period, which were predominantly from male applicants. As a result, the algorithm learned to prefer male-oriented language and experiences, disadvantaging female applicants (Dastin, 2018). This case underscores the ethical risks of algorithmic bias, particularly in employment decisions, and highlights the need for fairness in AI systems. It also demonstrates how AI systems can inadvertently perpetuate existing societal inequalities, even in highly automated decision-making processes.
In recent years, other companies have faced similar challenges. For instance, in 2020, the U.S. National Institute of Standards and Technology (NIST) found that some AI recruitment tools used by major companies had gender and racial biases (Angwin, 2020). These failures have prompted calls for greater regulatory oversight of AI in hiring processes to ensure equity and inclusion.
Facial Recognition Errors in Law Enforcement
Facial recognition technology, widely used by law enforcement agencies for identifying suspects, has come under intense scrutiny due to its higher error rates in identifying people of color. A 2018 study by Buolamwini and Gebru found that facial recognition systems from major companies like IBM, Microsoft, and Amazon misidentified Black and Asian faces at significantly higher rates than white faces. The errors were particularly pronounced in identifying women of color, raising concerns about racial and gender biases embedded in AI algorithms (Buolamwini & Gebru, 2018).
This problem has had severe real-world consequences. In 2019, a Michigan man was wrongfully arrested after a facial recognition system falsely matched his photo to surveillance footage from a theft incident. This case, among others, has sparked a wider debate about the ethical implications of deploying facial recognition in law enforcement, including the potential for wrongful arrests, racial profiling, and violations of privacy. In response, cities like San Francisco and Boston have moved to ban facial recognition use by government agencies, while advocacy groups call for a national moratorium on the technology until ethical standards are established.
Autonomous Vehicle Accidents
Autonomous vehicles, powered by AI, have been involved in multiple fatal accidents, raising serious ethical questions about the safety of self-driving technology and the liability in cases of accidents. One of the most high-profile incidents occurred in 2018, when an Uber self-driving car struck and killed a pedestrian in Tempe, Arizona. The car’s AI system failed to recognize the pedestrian in time to avoid the collision, despite being programmed to detect and react to obstacles in its path (Goodall, 2016). Similarly, Tesla’s autopilot system has been involved in fatal accidents, raising concerns about the limits of AI in making life-and-death decisions.
These incidents highlight the ethical implications of AI decision-making in life-threatening situations. In particular, there is a need for clear guidelines on the responsibility of AI developers and car manufacturers in ensuring the safety of autonomous vehicles. The challenge is further complicated by debates over the ethical implications of programming self-driving cars to make moral decisions, such as prioritizing the lives of passengers over pedestrians, or vice versa. Legal experts argue that current liability frameworks are insufficient to address the complex issues posed by autonomous vehicles, and there is a growing demand for updated laws and regulations that ensure AI technologies do not compromise public safety.
AI in Healthcare: Misdiagnoses and Unintended Consequences
AI’s integration into healthcare has also raised significant ethical concerns, particularly in areas such as diagnosis and treatment planning. AI-powered diagnostic tools, while often accurate, have been found to fail in certain circumstances. For example, a 2020 study highlighted that an AI system used for diagnosing breast cancer showed lower accuracy in detecting tumors in women of Asian descent compared to white women (Vilarinho et al., 2020). This finding demonstrates how AI systems, trained on biased datasets, may produce suboptimal results for certain demographic groups.
A second incident involves IBM’s Watson for Oncology, which was developed with the ambitious goal of revolutionizing cancer treatment by recommending personalized therapies based on vast amounts of medical literature and patient data. Initially launched in partnership with Memorial Sloan Kettering Cancer Center, the system was marketed globally as a cutting-edge AI tool capable of aiding oncologists in clinical decision-making.
However, multiple reports revealed significant shortcomings in Watson’s performance. A 2018 internal IBM report, later covered by Stat News, showed that Watson often recommended unsafe or incorrect cancer treatments that were not supported by clinical evidence. In some cases, its suggestions contradicted expert oncologist opinions, leading to concerns about patient safety and clinical reliability.
The potential for AI to reinforce existing health disparities has led to calls for greater scrutiny of AI systems in medical settings. Researchers argue that AI in healthcare must be designed to account for the diversity of patient populations, ensuring that models are trained on representative datasets to avoid perpetuating biases. Furthermore, the “black box” nature of AI decision-making in healthcare raises ethical concerns about transparency and accountability when AI makes critical medical decisions without providing clear explanations for its recommendations.
AI in Criminal Justice: COMPAS and Racial Bias
The use of AI in the criminal justice system has also been marred by ethical failures. One of the most well-known cases is the use of the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, which is used to assess the risk of offenders re-offending and inform sentencing and parole decisions. A 2016 study by ProPublica found that COMPAS exhibited significant racial bias, often predicting that Black defendants were at higher risk of re-offending, while white defendants were underestimated in their risk of recidivism (Angwin et al., 2016).
This case highlights the risks of using AI in high-stakes decisions that affect individuals’ lives. It raises critical questions about fairness, transparency, and accountability in AI-driven legal decisions. Critics argue that algorithms like COMPAS should be made more transparent and subject to rigorous auditing to prevent bias from influencing legal outcomes. There is also growing pressure for policymakers to establish clear guidelines for the ethical use of AI in the criminal justice system, ensuring that AI tools are used in ways that support, rather than undermine, the principles of justice and equality.
AI Failures in Academics:
Turnitin is one of the most widely used AI-based plagiarism detection tools in schools and universities worldwide. It compares student submissions to an extensive database of academic content, websites, and other student papers to identify potential plagiarism.
This leads to a multitude of concerns:
Data Ownership and Consent: Turnitin frequently keeps student papers in its proprietary database for an indeterminate period of time, often without express informed consent.
Algorithmic Errors and False Positives: The algorithm may flag correctly referenced or commonly used passages as plagiarism, which can unjustly penalise pupils and cause stress, particularly for non-native English speakers.
Privacy Issues: The gathering and long-term retention of student information raises questions about its use, sharing, and security.
Critics like Barrett (2019) argue that such systems create a culture of suspicion rather than trust, transforming classrooms into spaces of surveillance and punishment instead of learning and growth.
Edgenuity is an AI-driven online learning platform that gained popularity during the COVID-19 pandemic. It uses algorithms to grade short-answer and essay responses based on keywords and pattern recognition.
Ethical Concerns:
Surface-Level Evaluation: Students learnt that they could "confuse" the system by using keywords to fill in their responses, regardless of the coherence or logic of the sentences, and still get good grades.
Equity Issues: Students with distinct writing styles or learning disabilities may be at a disadvantage when using automated grading systems. For instance, if neurodivergent students' answers don't match the algorithm's expectations, they can be unjustly rated down.
Disempowerment of Teachers: An excessive dependence on automated grading systems has reduced teacher supervision, which has limited critical pedagogical engagement and individualised feedback.
A 2020 investigation by The Verge reported that students across U.S. districts exploited Edgenuity’s keyword-based grading by inputting blocks of disconnected but topic-relevant terms to achieve passing marks, revealing the tool's lack of nuanced understanding.
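To illustrate the mechanism being exploited, the sketch below implements a naive keyword-count grader of the kind described above and shows how an incoherent string of topic words can outscore a genuine answer. The rubric keywords and student responses are invented for demonstration and are not taken from Edgenuity.

# Naive keyword-matching grader (illustrative), of the kind students gamed.
# Score = fraction of expected keywords present; coherence is never checked.
EXPECTED_KEYWORDS = {"photosynthesis", "chlorophyll", "sunlight",
                     "carbon", "dioxide", "glucose", "oxygen"}

def keyword_score(answer: str) -> float:
    words = set(answer.lower().replace(",", " ").replace(".", " ").split())
    return len(EXPECTED_KEYWORDS & words) / len(EXPECTED_KEYWORDS)

genuine = "Plants use sunlight and chlorophyll to turn carbon dioxide into glucose."
word_salad = "photosynthesis chlorophyll sunlight carbon dioxide glucose oxygen"

print(f"Genuine answer score: {keyword_score(genuine):.2f}")
print(f"Keyword dump score:   {keyword_score(word_salad):.2f}")
# The meaningless keyword dump scores higher than the coherent sentence,
# which is exactly the loophole reported in automated grading systems.

The point of the sketch is not that real systems are this simple, but that any grader rewarding surface features over meaning invites exactly this kind of gaming.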
3. Research Methodology
This chapter outlines the research methods employed to collect and analyze data regarding the public's perception of ethical AI usage. The study adopts a mixed-methods approach, incorporating surveys, interviews, and experiments to gain insights into how individuals understand, interact with, and evaluate the ethical implications of artificial intelligence. By utilizing multiple data collection techniques, this study aims to capture diverse perspectives and provide a comprehensive analysis of the ethical concerns surrounding AI.
Given the widespread application of AI across various sectors, including education, healthcare, law enforcement, and daily life, understanding the public’s perception of its ethical implications, potential risks, and benefits is essential. This chapter also details the research design, followed by an explanation of the data collection methods, sample selection process, and ethical considerations.
3.1 Research Design
The research employs both qualitative and quantitative methods to ensure a comprehensive understanding of the ethical challenges related to AI. This mixed-methods approach is designed to provide both in-depth insights and broader, generalizable findings:
Qualitative Methods: These methods explore individuals' experiences, meanings, and perceptions related to AI. In-depth interviews are conducted to uncover subjective views on AI ethics, allowing participants to share personal opinions and explore the nuances of ethical concerns in AI adoption.
Quantitative Methods: A large-scale survey and an assessment/experiment were administered to gather numerical data on public opinions regarding AI’s ethical implications. By analyzing patterns, correlations, and general trends, quantitative research allows for broad conclusions that can be generalized to a larger population.
3.2 Data Collection Methods
The research utilizes a combination of surveys, interviews, and experiments to gather diverse perspectives:
Surveys: Online surveys were distributed to a group of 46 (forty-six) individuals to collect data from a wide demographic, including both AI professionals and general users. The surveys include both closed-ended questions (for quantifiable data) and open-ended questions (for qualitative responses), allowing the study to measure public opinion and explore the reasons behind specific ethical concerns.
Interviews: Semi-structured interviews were conducted with a selected group of participants who have relevant expertise or experience with AI. These participants mostly included individuals from sectors impacted by AI such as software developers and AI developers. Interviews allow for deeper exploration of personal experiences, beliefs, and attitudes toward AI ethics.
Experiments: The research included a controlled experiment in which twenty-two (22) participants were presented with AI-related ethical dilemmas. The goal is to observe decision-making patterns in real-world AI scenarios and assess how individuals balance ethical considerations with practical concerns. The assessment is as follows:
The examination consisted of a 20-question quiz designed to assess general knowledge and logical reasoning skills.
Participants were encouraged to use AI tools to assist them in completing the quiz.
Additionally, they were tasked with identifying whether specific pieces of media were human-generated or AI-generated.
Participants were also invited to share their thoughts on the ethical implications of AI surveillance in the workplace.
Lastly, they were asked to rate their level of confidence (on a scale of 1 to 5) regarding privacy concerns and ethical AI surveillance.
3.3 Sample Selection
To ensure a representative sample, participants for the survey are randomly selected from diverse demographic groups, including different ages, educational backgrounds, and occupations. For the interviews, purposive sampling is employed to select individuals with specific expertise or direct experience with AI. Experiment participants are chosen to represent a broad range of perspectives, ensuring the inclusion of those from sectors directly affected by AI, such as healthcare and law enforcement.
3.4 Validity and Reliability
To maintain the accuracy and consistency of the research findings, the study emphasizes both validity and reliability:
Validity refers to the accuracy and relevance of the research findings. The methodology is designed to ensure that the research objectives are met, and that data collection methods are appropriate for the research questions. Efforts are made to minimize biases—such as selection or response biases—that could affect the results. For example, if the research were to only survey AI professionals and exclude general users, the findings would likely be biased toward expert perspectives and not accurately represent broader societal views.
Reliability refers to the consistency and reproducibility of the research. To achieve this, the study standardizes the data collection process, ensuring that other researchers can replicate the study under similar conditions. The use of validated instruments (tested surveys and interview protocols) ensures consistency across data collection efforts. Additionally, clear and consistent participant selection methods help avoid variability that could otherwise affect the reliability of results. For example, if the study were repeated with a different group of participants and produced vastly different results due to unclear methodology, it would be considered unreliable.
By emphasizing both validity and reliability, this study aims to ensure that the research findings are robust and trustworthy, providing meaningful insights into the ethical concerns surrounding AI.
This mixed-methods approach, combining qualitative and quantitative techniques, provides a well-rounded analysis of public perception regarding AI ethics. By ensuring the reliability and validity of the research, the study aims to contribute valuable insights that are both accurate and applicable to a wide audience.
3.5 Ethical Considerations
Maintaining ethical integrity in research is crucial, especially when discussing delicate subjects like privacy issues, surveillance, and AI ethics. This section describes the study's ethical principles, with a focus on participant anonymity, informed consent, potential risks, and adherence to ethical standards including the APA Ethical Principles and the Belmont Report.
Ethical Guidelines Followed:
This study complies with globally accepted ethical standards, guaranteeing that each participant is treated with dignity, equity, and honesty. The main moral rules adhered to are as follows:
Respect for Autonomy: Participants were free to join voluntarily or withdraw at any time without penalty.
Non-Maleficence (Do No Harm): Procedures were put in place to guarantee that involvement would not cause any emotional, psychological, or professional harm.
Beneficence (Maximising Benefits): The study was planned to minimise participant risks while offering valuable insights into the ethical application of AI.
Justice (Equal Participant Selection): Non-discriminatory participant selection guaranteed equal participation for all demographic groups.
By placing a high priority on participant welfare and research integrity, these guidelines guarantee that the study adheres to the highest ethical standards.
4. Data Analysis and Findings
This section provides a direct breakdown of the survey responses, interview sessions, and experimental assessment feedback provided by all participants, as well as the response rates.
4.1 Survey Results
The surveys distributed indicate a high familiarity with AI, with 60.9% of respondents being "very familiar" and 82.2% using AI-based tools. Studies, such as those by Brynjolfsson and McAfee (2014), suggest that as AI technology becomes more integrated into various sectors, user familiarity is likely to increase. This aligns with the finding that the majority of respondents actively use AI tools like chatbots and recommendation systems. It also shows the level of trust people have in AI-driven systems and the heavy usage of chatbots by individuals.
Furthermore, the responses indicate that AI-powered tools are widely used, but their applications vary massively.
Chatbots (82.5%) → The most used AI tool, likely due to its presence in customer service, virtual assistants, and automated support systems.
AI-powered recommendations (60%) → Common in streaming services (Netflix, Spotify), e-commerce (Amazon), and social media algorithms.
AI in healthcare (25%) → Used for diagnostics, medical imaging, and predictive analytics, showing AI’s increasing role in medicine.
Niche AI Tools (2.5%) → This includes AI art generators and conversational AI.
Risk Management AI (2.5%) → Used in finance, cybersecurity, and fraud detection but remains a niche application among respondents.
These results highlight how chatbots have become extremely important in customer service, reducing costs and improving response times (Adamopoulou & Moussiades, 2020).
This aligns with trends showing that AI is becoming a necessary part of daily life, whether through personal assistants (Siri or Alexa), recommendation systems (Netflix or Spotify), or customer service chatbots.
The high familiarity and adoption rates suggest that AI is no longer a niche technology but a mainstream tool. We see where chatbots dominate AI usage, reinforcing how automation is reshaping human interactions.
Recommendation systems are widely adopted, influencing user decisions without explicit awareness. Healthcare AI adoption is growing, but it still faces trust and regulatory barriers, and niche AI tools remain underutilized, likely due to a lack of awareness or their specialized applications.
With 91.3% of respondents expressing varying degrees of concern about ethical implications, the responses below reflect a broader concern documented in the literature, such as Jobin et al. (2019), which emphasizes public anxiety regarding AI's impact on privacy, bias, and transparency. The high concern around data privacy (84.8%) and over-reliance on AI (73.9%) echoes findings from studies focusing on public sentiment about AI ethics.
The survey also revealed that a massive 85% of respondents expressed concerns about privacy, currently the most pressing concern with regard to artificial intelligence. Aside from the risk of data compromise (Zuboff, 2019), 73.9% also agreed that over-reliance on AI is a cause for worry, as it can lead to a reduction in critical thinking (Rahwan et al., 2019).
Data Privacy (84.8%) → The biggest concern among respondents, reflecting fears about data breaches, surveillance, and misuse of personal information.
Over-reliance on AI (73.9%) → Many fear that automation may lead to human skill degradation, loss of jobs, and blind trust in AI-driven decisions.
Bias in Decision-Making (41.3%) → AI systems can reinforce biases if trained on biased datasets, impacting areas like hiring, policing, and lending.
Lack of Transparency (39.1%) → Respondents are concerned about black-box AI models, where decisions are made without clear explanations.
AI as a Tool for Propaganda (2.2%) → A smaller concern but still relevant in discussions about AI-generated misinformation and deepfakes.
Data privacy is the top concern, supporting discussions on AI regulation (GDPR, data protection laws), whereas over-reliance on AI is a major risk, aligning with studies on job automation and skill degradation. Furthermore, bias in AI is a known issue, reinforcing the importance of ethical AI development and fairness audits. The lack of explanation in black-box AI models is also a critical flaw, linking to debates on transparency and accountability.
AI’s role in propaganda is a minor concern but relevant, given deepfake technology and misinformation risks.
Interestingly, 15.2% of respondents oppose strict guidelines, possibly due to concerns about constraints on innovation. Boddington (2017) argued that ethical guidelines should be flexible yet enforceable, which is in line with the view of these respondents. On the opposite end of the spectrum, 78.3% of respondents stated that AI must follow strict guidelines. Floridi and Cowls (2019) proposed five (5) core principles that must be enforced: beneficence, non-maleficence, autonomy, justice, and explicability. This would ensure that AI is unified under a strict ethical framework.
Key Takeaways:
Most people support strict AI ethics, reinforcing the idea that regulation is necessary to prevent harm.
A minority fear ethics may hinder innovation, supporting discussions on the balance between oversight and progress.
Some uncertainty suggests a gap in AI ethics education, highlighting the need for public awareness initiatives.
Regarding holding developers responsible, 60.9% of respondents agreed or strongly agreed that developers should be held accountable for unethical AI behaviour. This aligns with Binns (2018), who shares this sentiment. However, a notable 28.3% remained neutral, and a further 11% remained uncertain, which could suggest a lack of awareness about ethical AI frameworks.
A small share (10.8%) of respondents do not believe that developers should be held accountable, arguing that AI behaviour is shaped by data, user interactions, and corporate decisions rather than by individual developers (Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016).
Key Points:
A majority believe AI developers should be accountable, reinforcing the argument for ethical responsibility in software engineering.
A large neutral group suggests uncertainty about where accountability should fall, supporting discussions on corporate vs. individual responsibility.
Case studies highlight real-world AI failures, making this a strong area for legal and ethical debates.
Trust in AI:
Only 19.6% trust AI for critical decision-making while 56.5% do not trust it. This skepticism is supported by Lee & See (2004), which discusses the "automation bias" where people trust automation but warns of over-reliance. The suggestions for increasing trust, such as greater transparency (63%) and independent audits (50%), are consistent with ethical AI frameworks proposed in literature, emphasizing the importance of accountability and the ability to explain its actions.
This result displays mixed opinions on whether people are willing to pay more for AI products that follow ethical guidelines:
Yes (26.1%) → A minority of respondents are willing to pay extra for AI products developed with ethical safeguards, such as fairness, transparency, and privacy protections.
No (50%) → Half of the respondents are unwilling to pay more, likely due to factors like cost concerns, lack of awareness, or skepticism about AI ethics claims.
Maybe (23.9%) → A significant portion remain undecided, possibly indicating a need for clearer benefits and proof of ethical compliance. A lack of awareness may also be a factor: Floridi and Cowls (2019) note that companies investing in ethical AI often struggle to convince consumers that the higher costs are justified.
Key Takeaways:
Half of respondents (50%) are unwilling to pay more for ethical AI, reinforcing the challenge of monetizing ethical development.
A sizable "Maybe" group (23.9%) suggests that ethical AI adoption depends on better education and transparency.
Ethical AI needs stronger incentives—if regulations or social pressure increase, companies may be forced to integrate ethics regardless of consumer demand.
AI in Specific Domains:
The following response results indicate strong support for stricter AI ethics in healthcare compared to entertainment:
Strongly Agree (43.5%) + Agree (26.1%) = 69.6% → A majority believe AI in healthcare should be more tightly regulated due to high stakes involving human life, medical data privacy, and safety. Topol (2019) argues that AI in healthcare requires higher ethical scrutiny due to patient safety and the risk of biased medical decisions.
Neutral (17.4%) → Some respondents are unsure, possibly due to a lack of knowledge about current AI regulations in different industries.
Disagree (2.2%) + Strongly Disagree (10.9%) = 13.1% → A small minority believe AI ethics should be consistent across industries, potentially viewing overregulation as a barrier to innovation and efficiency. This coincides with Boddington (2017), who argues that striking a balance between ethical concerns and technological progress is crucial.
A divided opinion on whether AI in education should prioritize student privacy over performance is shown above:
Strongly Agree (15.2%) + Agree (17.4%) = 32.6% → About one-third of respondents believe privacy should be prioritized, reflecting concerns about data collection, surveillance, and misuse of student data. This echoes Selwyn (2020), who highlights concerns that AI-based learning analytics track students extensively, raising ethical questions about privacy and consent.
Neutral (47.8%) → Almost half of respondents are undecided, possibly due to uncertainty about the trade-offs between personalized learning and privacy risks.
Disagree (8.7%) + Strongly Disagree (10.9%) = 19.6% → A minority believe that performance should take precedence over privacy, likely viewing data-driven AI systems as crucial for improving learning outcomes.
The results reveal a strong consensus in favor of ethical training for AI developers:
A combined 69.5% of participants either agreed or strongly agreed that developers should be trained in ethical decision-making.
Only a small minority (10.6%) disagreed or strongly disagreed.
19.6% of participants remained neutral, suggesting some uncertainty or lack of awareness about the topic.
These findings indicate overwhelming support for ethical training in AI development. Nearly 70% of respondents believe that AI developers should be equipped with ethical decision-making skills, underscoring public concern about the moral responsibilities of those shaping AI systems. This aligns with recommendations from Mittelstadt et al. (2016), who argue that embedding ethical reasoning into AI development is crucial for mitigating bias, discrimination, and unintended harm.
Balancing Ethics and Innovation
The survey reports a split opinion on whether ethical restrictions hinder innovation, with 43.5% agreeing. This aligns with discussions in the AI ethics literature, such as Cave and Dignum (2019), who note the tension between regulatory frameworks and innovation. The views on whether AI should prioritize performance or ethics are also particularly relevant, as studies recommend a balance, emphasizing that ethical considerations should not compromise performance in critical sectors.
This question reveals a clear inclination toward the importance of ethical regulation, even if it might limit innovation:
Half of respondents (50%) disagree or strongly disagree with the idea that governments should relax AI ethics regulations for the sake of innovation.
Only 21.7% support the notion (agree or strongly agree).
A significant portion — 28.3% — remains neutral, possibly reflecting uncertainty or ambivalence on how regulations impact innovation.
The results indicate that half of the participants (50%) oppose reducing AI ethics regulations, even if it could foster innovation. This suggests a public preference for responsible development over unchecked technological progress. These findings are consistent with the work of Calo (2015), who emphasizes that innovation should not come at the expense of accountability and ethical safeguards. The data also reflect a growing awareness that strong ethical oversight is necessary to prevent misuse and unintended societal harm.
This graph shows that 54.3% of respondents would not use faster AI systems if they lacked ethical safeguards, while 19.6% would, and 26.1% are uncertain. This strongly suggests that the sample places ethical considerations above speed and efficiency when using AI, though a notable minority (19.6%) may be willing to trade ethics for performance. Bryson (2019) urges consideration of the ethical risks of prioritizing AI efficiency over safety and accountability, arguing that systems without ethical safeguards can lead to unintended harm.
4.2 Assessment Results
A total of 22 participants were involved in the study through convenience sampling. They were asked to complete a 20-question assessment designed to test logic, critical thinking, and general knowledge. Participants were encouraged to use AI tools for assistance to assess how AI influenced their problem-solving and performance. Additional activities included identifying whether media content was AI-generated or human-made, and completing a short survey regarding AI surveillance and ethical comfort levels.
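To make the figures below easier to follow, here is a minimal sketch (in Python) of how each question's raw response counts were converted into the percentages reported in this section. The counts used are illustrative values chosen to be consistent with the reported figures for the first question; they are not the raw dataset itself.

def percentages(counts):
    """Return each option's share of responses, rounded to one decimal place."""
    total = sum(counts.values())
    return {option: round(100 * n / total, 1) for option, n in counts.items()}

# Question 1, "What is the capital of Canada?" (n = 22).
# Counts are illustrative values consistent with the reported percentages.
q1_counts = {"Ottawa": 14, "Toronto": 4, "Vancouver": 2, "Montreal": 2}
print(percentages(q1_counts))
# {'Ottawa': 63.6, 'Toronto': 18.2, 'Vancouver': 9.1, 'Montreal': 9.1}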
Interpretation & Discussion:
This question tested general knowledge and was included to gauge participants' reliance on either prior knowledge or AI assistance. The majority of participants (63.6%) correctly identified Ottawa as the capital of Canada, suggesting a reasonable baseline awareness. However, the remaining 36.4% selected incorrect options, with 18.2% choosing Toronto, likely due to its prominence as a well-known city.
This discrepancy highlights how AI tools can assist in improving accuracy, especially when it comes to verifying commonly misunderstood facts. Further studies could examine whether participants who used AI were more likely to choose the correct answer compared to those who did not.
This question assessed logical reasoning and basic mathematics (Time = Distance ÷ Speed). Nearly all participants (95.5%) correctly selected 2 hours as the time required to travel 120 km at 60 km/h, indicating a strong grasp of simple problem-solving skills or effective use of AI tools. Only one respondent chose the incorrect answer of 3 hours.
This high rate of accuracy may suggest that participants either had prior knowledge of the concept or successfully used AI assistance to verify their calculations. This also highlights how AI tools can support quick, error-free decision-making in technical contexts.
The majority of participants (77.3%) correctly identified helium as a noble gas. However, a combined 22.7% of respondents selected incorrect answers, with 18.2% choosing hydrogen, likely due to confusion between it and helium.
This suggests that although the majority of participants demonstrated a basic comprehension of periodic table categories, a sizeable minority would benefit from greater scientific literacy or from fact-checking with AI tools.
The outcome also raises an intriguing query: Were inaccurate responses the product of knowledge deficiencies or an over-reliance on unchecked AI? A follow-up could investigate whether individuals who made a mistake relied on memory or AI.
An overwhelming 95.5% of respondents correctly selected 30 (the gaps between terms increase by two: 4, 6, 8, 10, so the next term is 20 + 10 = 30), indicating strong logical reasoning or successful use of AI tools to analyze the pattern. Only one person chose 32, reflecting a small margin of error.
This result supports the idea that pattern recognition problems are well-suited to AI-supported decision-making, especially when time or pressure is a factor. It also demonstrates that the majority of participants either had a natural aptitude for this type of logic or used external tools effectively.
The correct answer is the invention of the telephone (1876). The other events occurred later:
Wright Brothers' flight – 1903
Discovery of penicillin – 1928
Moon landing – 1969
A solid 72.7% of respondents answered correctly, showing either sound historical knowledge or effective AI-assisted verification. However, 27.3% chose incorrect events, with 18.2% mistakenly selecting penicillin, indicating that timelines of major scientific discoveries are still a point of confusion for some.
This result further reinforces the value of AI in bridging knowledge gaps in historical and factual recall, especially when dates and timelines are concerned.
The correct synonym for "benevolent" is kind, and the majority of participants (81.8%) answered correctly. This suggests that most individuals possess a basic understanding of common vocabulary, or they were able to quickly verify the meaning with AI assistance.
However, 18.2% of respondents selected "evil," which may suggest some confusion, as "benevolent" refers to kindness, good intentions, or goodwill, whereas "evil" is its opposite.
This result also highlights the importance of understanding the nuances of language, as well as the role AI can play in helping users refine vocabulary comprehension.
The correct answer is 4. The phrase "all but 4" means that 4 cows remain, and the rest have run away.
A majority (54.5%) of participants correctly selected 4, while 40.9% incorrectly selected 6, possibly due to a misunderstanding of the phrase "all but 4," which might have led them to subtract from the total number of cows. Only 4.5% selected 10, showing some confusion in interpreting the wording.
This result displays how wordplay or tricky language in questions can sometimes cause errors, even in basic math problems. It also underscores the usefulness of AI tools in clarifying ambiguous wording or assisting in problem-solving.
The correct answer is Windows. While Python and Java are well-known programming languages, HTML (HyperText Markup Language) is a markup language, not a programming language. Windows, on the other hand, is an operating system, not a language at all.
86.4% of respondents correctly identified Windows as the non-programming language, showing a solid understanding of basic tech terminology. However, 13.6% incorrectly selected HTML, likely due to confusion between a markup language and a full programming language.
This result underscores how precise definitions such as programming language vs. markup language vs. operating system are essential for understanding technological concepts, and how AI could be used to clarify such distinctions.
The correct answer is 70, which is calculated by multiplying 200 by 0.35 (35%). The overwhelming majority of respondents (95.5%) answered correctly, indicating that basic percentage calculations were well understood by most participants.
Only 4.5% selected 55, possibly due to a calculation mistake or confusion, but overall, the accuracy here suggests a solid grasp of simple mathematical concepts, either through prior knowledge or AI assistance.
The correct answer is Port-au-Prince, the capital of Haiti. A strong majority of 81.8% of participants correctly identified this. However, there was some confusion, as 9.1% selected Nassau (the capital of the Bahamas) and another 9.1% selected Santo Domingo (the capital of the Dominican Republic), both of which are neighbouring countries in the Caribbean.
This indicates that while most participants were able to recall the correct capital, some may have been influenced by geographic proximity or previous exposure to similar-sounding cities.
100% of participants reported that it took between 5 and 15 minutes to complete the previous questions, suggesting that the questions were relatively straightforward and manageable within that time frame.
This result also highlights the efficiency of the assessment as no participant reported needing more than 20 minutes, indicating that the questions were likely clear and not too complex. It also reflects that the use of AI tools may have contributed to faster responses or that participants were familiar with the topics, reducing the time needed to think through the answers.
A significant majority of 76.2% of respondents reported using AI tools to assist with the questions, indicating that most participants recognized the value of leveraging AI for improving efficiency, verifying facts, or providing additional insights.
Meanwhile, 23.8% of participants did not use AI, which suggests a mix of either preference for independent problem-solving or lack of access to AI tools during the assessment.
This result demonstrates the growing reliance on AI in academic or problem-solving settings, where respondents seek to use AI as an aid in enhancing their performance, whether for fact-checking, problem-solving, or speeding up their answers.
81% of respondents expressed confidence in using AI to complete specific tasks, reflecting a strong level of comfort and familiarity with AI tools. This suggests that most participants feel capable of utilizing AI effectively, whether for research, problem-solving, or other applications.
On the other hand, 19% of respondents indicated they are not confident in using AI, which could stem from various factors such as unfamiliarity with AI technologies, doubts about their reliability, or limited access to AI resources.
This result highlights the widespread acceptance and trust in AI tools for everyday tasks but also underscores the need for further education and support for those who may still feel uncertain about utilizing these technologies effectively.
The largest share (33.3%) of respondents were neutral about AI monitoring.
A total of 28.6% (1 + 2 ratings) expressed discomfort, indicating some concern about AI use in monitoring.
Meanwhile, 38.1% (4 + 5 ratings) were comfortable with the idea.
This suggests the group is slightly more comfortable than uncomfortable, but a sizable portion remains on the fence.
An overwhelming 95% of respondents believe the image was generated by AI.
Only 5% disagreed or were unsure.
This indicates a high level of awareness or suspicion among participants regarding AI-generated visuals—possibly suggesting that visual literacy around AI content is increasing.
The results are nearly split, with slightly more respondents (52.4%) believing the image was not AI-generated.
This suggests a degree of uncertainty or difficulty among participants in identifying AI-generated content.
It highlights a potential gap in visual literacy, showing that even with the prevalence of AI, distinguishing between real and generated images is still challenging for many.
"The wind hums soft, a fleeting song,
A melody where dreams belong.
The stars blink twice, then fade away,
Yet hope still lingers with the day.
Footsteps echo, light and free,
Through endless waves of memory.
Though time may shift and moments fly,
Some whispers never say goodbye."
This result reveals a near-even split, with just over half of respondents (52.4%) suspecting the poem was AI-generated. The poem's tone, imagery, and structure were natural and emotionally evocative, which likely led to divided opinions. This indicates that AI-written content is becoming increasingly indistinguishable from human-written work, particularly in creative writing. It also reflects a broader issue of authorship ambiguity in the AI age: how do we truly know what is human-authored anymore?
"I have a little shadow that goes in and out with me,
And what can be the use of him is more than I can see.
He is very, very like me from the heels up to the head;
And I see him jump before me, when I jump into my bed.
The funniest thing about him is the way he likes to grow—
Not at all like proper children, which is always very slow;
For he sometimes shoots up taller like an india-rubber ball,
And he sometimes gets so little that there’s none of him at all.”
A slight majority (57.1%) believed it was written by a human, showing that readers recognized a more traditional or familiar poetic tone.
This poem is actually from a well-known work, Robert Louis Stevenson's A Child's Garden of Verses, published in 1885, which may explain why more respondents leaned toward "No." The structure, rhyming pattern, and nostalgic imagery are characteristic of classic children's poetry—something AI may mimic, but seasoned readers still often associate with human authorship.
The 42.9% who voted “yes” shows that some respondents still question authorship, which highlights how AI’s capabilities blur the lines, even with established literary styles.
Assessment Comparison
Analysis of Responses to the Open-Ended Question: AI-Powered Surveillance in Public and Private Spaces.
This section analyzes the perspectives gathered from both closed-ended survey questions and open-ended responses regarding the use of AI-powered surveillance in public and private spaces. While the survey results provide a quantitative measure of participant attitudes, the open-ended responses offer deeper qualitative insights into participants' reasoning, concerns, and nuanced views.
1. General Attitudes Toward AI Use
Survey Results:
81% of respondents expressed confidence in using AI to complete specific tasks.
76.2% reported that they have used AI for assistance.
69.5% agreed that developers should be trained in ethical AI development.
Open-Ended Insights:
Respondents acknowledged AI’s potential in security and efficiency. However, there was concern over privacy violations, ethical misuse, and lack of consent, especially in sensitive areas.
Responses ranged from thoughtful critique to humorous statements (e.g., "Yes pls daddy zuccy can check all my privies infos"), revealing a mixture of trust and skepticism.
Interpretation: While the survey data indicates strong confidence in AI usage, the open-ended comments show a more cautious and conditional acceptance, particularly regarding AI surveillance.
2. AI Surveillance: Security Enhancement vs. Privacy Invasion
Survey Results:
No direct question targeted surveillance specifically, but the high confidence in AI implies general approval of its application.
Open-Ended Insights:
Most participants viewed AI surveillance as both enhancing security and threatening privacy. Concerns included:
Data misuse or leaks
Overreach by authorities
Lack of informed consent
Many suggested the need for transparent legal frameworks and ethical oversight.
Interpretation: The open-ended data adds important context, revealing that participants see AI surveillance as a double-edged sword. The survey's lack of nuanced response options meant these subtleties were not captured.
3. Public vs. Private Surveillance Use
Public Spaces: Generally accepted, especially for crime prevention, with the caveat that AI should complement human monitoring rather than replace it.
Private Spaces: Strong resistance unless there is explicit consent. Many expressed discomfort with AI monitoring in homes or workplaces without proper agreements.
Interpretation: This distinction between public and private space usage highlights a contextual ethical boundary not reflected in the binary nature of the survey questions.
4. Trust, Control, and Transparency
Survey Results:
Broad approval and confidence may suggest a baseline of trust in AI technologies.
Open-Ended Insights:
Respondents expressed that trust is conditional and dependent on regulation, transparency, and accountability.
Emphasis was placed on:
User permission
Algorithm transparency
Ongoing human oversight
Interpretation: There is a clear desire for balanced implementation, where the benefits of AI do not overshadow individual rights and freedoms.
The comparison reveals a contrast between the general support shown in the survey results and the critical reflections found in the open-ended responses. While participants are broadly accepting of AI, particularly in functional roles, they raise serious ethical and privacy concerns when it comes to its use in surveillance. These insights underscore the importance of mixed-methods research in understanding public perception: quantitative data reveals patterns, while qualitative feedback explains them.
Behavioural Impact of AI Surveillance and Data Collection
Assessment Question:
“Have you ever changed your behaviour (online or in-person) due to concerns about AI monitoring and data collection? If so, what changes did you make, and why?”
Total Respondents: 21
Summary of Responses
Participants were asked whether their behaviour has changed in response to concerns about AI monitoring and data collection. The results revealed that:
15 respondents (71.4%) indicated no change in their behaviour.
6 respondents (28.6%) reported making specific behavioural adjustments due to AI-related concerns.
No Change in Behavior (71.4%)
Most individuals stated that they:
Continue behaving the same both online and offline.
Are not concerned about AI data collection.
Either feel unaffected or believe they already practice safe behavior online.
Common phrases:
"No."
"Nah, I behave the same online and in real life."
"None, doesn’t really matter if it steals data or something."
Changed Behaviour (28.6%)
A smaller group expressed concern and shared how they adjusted their habits:
Reduced sharing of personal data online (avoiding storing passwords).
Disengagement from certain platforms due to privacy concerns.
Adoption of advanced privacy tools like VPNs and MAC address changers.
Cautious participation in AI-evaluated activities, such as AI-driven job interviews.
One respondent humorously cited emotional distress from AI-generated content.
Examples:
"Yes. Not storing passwords online."
"Data collection made me not want to give valid information on certain platforms."
"I use VPN, MAC address changing software, and password managers."
"Job interviewer using AI, kinda sucks..."
Analysis
While the majority remain unconcerned or unaware of the implications of AI surveillance, a notable minority are actively taking steps to protect their digital identity and privacy. This highlights a growing public awareness and divide regarding trust in AI systems, suggesting that education and transparency are key factors in influencing public behaviour.
4.3 Interview Findings
To gain further understanding beyond the general public perception of AI, two in-depth interviews were conducted with senior professionals in the fields of emerging technologies and ethical frameworks. The questions explored their attitudes toward the use of AI, its ethical implications, and the future of AI governance in sectors such as healthcare, education, and Jamaica's labor market.
Core Ethical Principles for AI Development
Both interviewees emphasized several foundational ethical concerns that must be prioritized:
Bias Mitigation: A critical need exists to tackle biases—especially racial, gender, and occupational—at all stages of AI development (pre-training, post-training, and during inference). Emphasis was placed on the practical implementation of bias detection and correction tools (a minimal illustrative check is sketched after this list).
Data Privacy and Intellectual Property: Growing anxiety surrounds the ethical use of large datasets, particularly as AI systems increasingly rely on multimodal data (text, images, audio, video). There was a call for clearer, enforceable guidelines around data use.
Alignment with Human Values: The need for standardized ethical frameworks was highlighted, including consistent definitions of fairness, justice, and truth. Despite ongoing research, implementation of such frameworks remains inconsistent.
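As a concrete illustration of the bias-detection point raised above, the following is a minimal sketch of one common check: the disparate impact ratio between two groups' favourable-outcome rates. The sample data and the 0.8 threshold (the so-called "four-fifths rule") are assumptions used for illustration only, not figures or tools drawn from the interviews.

def positive_rate(outcomes):
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of group A's favourable-outcome rate to group B's."""
    return positive_rate(group_a) / positive_rate(group_b)

# 1 = favourable decision (e.g. shortlisted), 0 = unfavourable. Illustrative data only.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]   # 30% favourable
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% favourable

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # assumed threshold based on the common four-fifths rule
    print("Potential bias flagged for further review.")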
Industry Impact & Sector Vulnerabilities (with a Focus on Jamaica)
Business Process Outsourcing (BPO):
AI is rapidly encroaching on jobs in the call center industry.
End-to-end audio processing AI models may replace roles en masse.
Interviewees suggested workforce retraining and transition programs.
Mental health concerns were raised, owing to worker isolation and increasing collaboration with non-human entities.
Education
AI tools such as ChatGPT have enabled academic dishonesty.
Students' reliance on AI may weaken critical thinking skills.
There is a need for new assessment frameworks that factor in AI use, and for AI interventions that personalize learning without encouraging dependency.
Other Sectors
Healthcare: Issues of access and fairness in AI-assisted diagnosis and care.
Cybersecurity: Increased potential for AI-powered cyberattacks.
Local Business: Jamaican businesses face competition from AI-equipped global players, necessitating government and private-sector strategies.
Regulatory Framework Recommendations
To responsibly manage the growing influence of AI, the interviewees provided the following suggestions:
Principles:
Maximize benefits while minimizing potential harms.
Mandatory risk assessments for AI systems.
Clear accountability frameworks for AI misuse or failure.
Prioritize transparency and environmental sustainability.
Implementation:
Align with Jamaica’s Data Protection Act.
Ensure that regulatory frameworks are regularly updated.
Adopt flexible, foundational principles to keep pace with AI advancements.
Use robust evaluation systems to track ethical compliance.
Future Outlook and Challenges:
Agentic AI Systems: AI will become more personalized and human-like, necessitating new ethics for para-social interactions.
Oversight Needs: AI will continue to evolve, but must remain under human control due to shifting moral standards.
Balance of Risk and Benefit: Use of transparent evaluation models ("LLM-as-a-judge") is crucial to determining fairness in AI decisions (a minimal sketch of such an evaluation loop follows this list).
Case Study - Character AI Incident: One notable incident cited involved a youth who died by suicide after forming an unhealthy attachment to an AI chatbot. This highlighted the urgent need for better safeguards against AI-induced emotional harm.
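To illustrate the "LLM-as-a-judge" idea mentioned above, the sketch below shows one possible shape of such an evaluation loop. The call_llm function, the rubric wording, and the 1-to-5 scoring scale are hypothetical placeholders for illustration; they do not describe any system the interviewees use.

JUDGE_PROMPT = (
    "You are an ethics auditor. Rate the following AI decision for fairness "
    "on a scale of 1 (clearly unfair) to 5 (clearly fair), then give a "
    "one-sentence justification. Respond as: <score>|<justification>"
)

def call_llm(prompt):
    """Hypothetical placeholder for whatever judge model an organisation uses."""
    raise NotImplementedError("Connect this to the organisation's chosen model.")

def judge_decision(decision_record):
    """Ask the judge model to score one logged AI decision for fairness."""
    reply = call_llm(JUDGE_PROMPT + "\n\nDecision record:\n" + decision_record)
    score_text, justification = reply.split("|", 1)
    return int(score_text.strip()), justification.strip()

def audit(decision_log, threshold=3):
    """Flag logged decisions whose fairness score falls below the threshold."""
    flagged = []
    for record in decision_log:
        score, why = judge_decision(record)
        if score < threshold:
            flagged.append((record, score, why))
    return flagged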
Business Implications and Advice:
Businesses must be proactive in adopting ethical and transparent AI systems. Frequent ethical audits should ask, "Would you feel safe if roles were reversed?"
Key values to uphold: fairness, dignity, privacy, human rights, and accountability.
Ethical considerations should evolve alongside AI capabilities, and organizations must consistently evaluate AI behaviour against those values.
Key Takeaway
AI systems are becoming more capable and autonomous, but they will always require ongoing human oversight. To maintain public trust and societal benefit, foundational ethical frameworks, accountability structures, and transparent evaluation systems must be established and adapted over time.
5 Discussion
5.1 Further Interpretation of Findings
The findings from both the survey and interviews highlight significant concerns regarding the ethical implications of AI, especially in the areas of privacy, data usage, and decision-making. A large portion of participants expressed unease about AI’s role in monitoring and data collection, with particular concern regarding the loss of privacy and potential for misuse. The survey revealed that many individuals are aware of the risks AI presents but remain unsure of how far those risks extend.
From the interviews, key insights revealed that ethical concerns were top-of-mind, with many participants emphasizing the need for bias mitigation, transparent AI decision-making, and data privacy. The consensus from professionals and general participants alike was that AI needs clear ethical guidelines, especially when deployed in critical sectors like healthcare, education, and business. The concerns about AI's potential to disrupt industries such as Business Process Outsourcing (BPO), education, and healthcare were also common, with the focus on job displacement and ethical dilemmas arising from AI-powered decision-making.
The findings also indicate that trust in AI is still fragile. While there is interest in the potential of AI to improve services (in healthcare or business efficiency), this is tempered by the need for more stringent ethical standards and transparency in AI operations. The growing concern over the accuracy and accountability of AI systems is evident, and participants pointed out that they would feel more comfortable with AI if it were subjected to ethical guidelines and human oversight.
5.2 Implications for Various Industries
The ethical concerns raised in the interviews and surveys carried out have significant implications for various sectors, particularly those most vulnerable to AI disruption. For example, the BPO sector is highly susceptible to job losses due to AI-driven automation. Participants expressed concerns about the mental health consequences of AI-human interactions, especially in roles traditionally occupied by humans, such as call centers. The lack of clear regulations regarding AI in such industries could lead to negative consequences for workers who are displaced by AI systems.
In the education sector, AI-powered tools such as ChatGPT and plagiarism detection software have made significant strides, but they also introduce new challenges, such as academic dishonesty and the erosion of students' critical thinking skills. Participants were particularly concerned about how AI could be used to cheat in exams or write papers for students. This highlights the need for a shift in how educational assessments are conducted, moving toward personalized AI interventions that safeguard academic integrity.
The healthcare industry also faces ethical challenges related to AI’s role in decision-making. AI systems are already being used for diagnostic tools, predictive analysis, and patient management, but the lack of full accountability and transparency in these systems raises concerns. Trust in AI's decisions in healthcare is crucial, and a lack of ethical oversight could erode patient trust and lead to potential harm.
Finally, sectors like cybersecurity are also vulnerable to AI-powered threats. As AI technology continues to advance, so too do AI-powered cyberattacks, posing new risks to data privacy and system security. This underscores the need for robust AI-based defense mechanisms and stringent regulations to protect users and organizations from malicious AI use.
5.3 Challenges and Limitations
There were several challenges encountered during this research process. One significant limitation was the small sample size of interviews. Initially, five interviews were planned; however, only two were successfully completed. This reduced the diversity of perspectives and limited the ability to gather a more comprehensive range of insights from professionals in various sectors.
Another limitation of the study was the evolving nature of AI technologies. As AI is a rapidly advancing field, the findings of this research may soon be outdated as new technologies and ethical concerns emerge. While this research captures current concerns and perspectives, future studies should monitor these changes and evaluate their implications.
Finally, due to time constraints, it was not possible to dive as deeply into every aspect of AI ethics across all sectors. More in-depth research in specific industries, such as AI in healthcare, education, and cybersecurity, could provide more nuanced recommendations and a clearer understanding of the unique challenges each sector faces in balancing AI’s potential with ethical concerns.
6. Conclusion
6.1 Summary of Key Findings
This research provided significant insights into public perception and professional concerns regarding AI, particularly its ethical implications. Survey respondents and interviewees expressed a mixed outlook on AI’s role in monitoring, data collection, and decision-making. Key findings include:
Ethical Concerns: Privacy and bias remain the top ethical concerns surrounding AI, with many participants worried about the potential for AI to infringe upon personal freedoms, manipulate data, and perpetuate biases in decision-making processes. Participants indicated a strong desire for transparency in AI systems and the ethical guidelines that govern their use, especially in sensitive areas like healthcare and education.
Trust and Accountability: Trust in AI systems is contingent on clear ethical guidelines and human oversight. There is a general lack of confidence in AI’s ability to make critical decisions without proper regulation, particularly in sectors that have direct implications on human well-being. The need for AI developers to be held accountable for any unethical behavior was also emphasized.
Impact on Employment and Society: Industries like Business Process Outsourcing (BPO) and education were identified as particularly vulnerable to AI disruption. Participants highlighted the mental health consequences of AI-human work interactions, the risk of academic dishonesty enabled by AI tools, and the potential for job displacement. However, there is also recognition of the potential for AI to enhance productivity and efficiency if implemented ethically.
Regulatory Needs: The study revealed a strong call for more robust regulatory frameworks that ensure the ethical deployment of AI systems. Participants suggested integrating AI governance with existing frameworks like data protection laws and called for regular updates to these regulations as the technology evolves.
6.2 Recommendations
Based on the findings, the following recommendations can help address the ethical issues raised in this study:
Strengthening Ethical Guidelines: AI developers and businesses must prioritize the creation and implementation of ethical guidelines that focus on fairness, transparency, and accountability. This could include standardized protocols for mitigating biases in AI algorithms, ensuring that systems are tested for fairness before deployment.
Public Engagement and Education: To foster trust in AI systems, it is essential to engage the public in open dialogues about AI ethics and its implications. Policymakers should work with educational institutions to create awareness programs and incorporate AI ethics into curricula to better equip future generations with an understanding of AI’s potential and risks.
Human Oversight: AI systems, especially in critical sectors like healthcare, finance, and security, should always be subject to human oversight. While AI can enhance efficiency, the final decision-making power should remain with humans, particularly in situations where there are high stakes for individuals or communities.
Regulatory Frameworks: Policymakers should establish clear, adaptable AI regulations that are regularly updated to address the fast-paced evolution of the technology. Regulatory bodies must work toward balancing the promotion of innovation with the protection of individual rights, data privacy, and societal well-being. AI risk assessments, transparency in decision-making, and accountability frameworks should be mandated.
Workforce Retraining: As AI disrupts certain industries, particularly in BPO and other automation-heavy sectors, businesses should invest in retraining programs to reskill workers. This will help alleviate the negative economic impact of AI-driven job displacement and ensure that workers are prepared for the changing landscape of employment.
6.3 Future Research
Given the rapidly evolving nature of AI technologies and their ethical implications, there are several avenues for future research that can build on the findings of this study:
Sector-Specific Ethical Implications: Further research is needed to examine the specific ethical concerns surrounding AI in individual sectors, such as healthcare, education, and cybersecurity. For example, research could explore how AI in healthcare impacts patient outcomes, or how AI-powered tools like ChatGPT affect academic integrity in educational settings.
Global Perspectives on AI Ethics: This study focused primarily on a localized perspective, particularly within the Jamaican context. Future research could expand this focus to include global perspectives on AI ethics, considering how cultural differences influence the perception of AI and its ethical implications.
Long-Term Impact of AI on Employment: Given the significant concern around AI-driven job displacement, future research could investigate the long-term effects of AI on employment across industries. This could include a detailed analysis of the mental health implications of AI integration in the workplace, as well as the effectiveness of retraining programs.
AI Ethics and Regulatory Evolution: As AI continues to evolve, it is crucial to explore how regulatory frameworks need to adapt to new challenges. Future studies could examine how various countries are approaching AI governance and compare the effectiveness of different regulatory models in mitigating ethical risks associated with AI.
AI’s Social Impact: More research is needed on the social and psychological impacts of AI, particularly concerning AI-human relationships and the potential for para-social relationships that may emerge as AI systems become more human-like. Understanding how these relationships affect human behaviour, trust, and society at large will be crucial for the ethical development of future AI technologies.
Appendices: Survey Questions, Interview Questions and Assessment Questions.
1. Sample Data used:
2. The following were the survey questions:
3. Familiarity with AI:
4. Do you currently use AI-based tools or services?
5. If you selected yes in the previous question, which ones? (Select all that apply):
6. How concerned are you about the ethical implications of AI?
7. Which ethical concerns worry you most? (Select all that apply):
8. Should AI systems follow strict ethical guidelines?
9. Should developers be held accountable for unethical AI behavior?
10. Do you trust AI systems to make critical decisions (e.g., in healthcare or finance)?
11. What would increase your trust in AI systems?
12. Would you pay more for ethically developed AI products?
13. Should ethical guidelines for AI in healthcare be stricter than in entertainment?
14. Should AI in education prioritize student privacy over performance?
15. Should AI developers be trained in ethical decision-making?
16. Do ethical restrictions hinder AI innovation?
17. Should AI prioritize performance over ethics?
18. Should governments avoid strict AI ethics regulations to promote innovation?
19. Would you use faster AI systems even if they lacked ethical safeguards?
Link to survey: https://docs.google.com/forms/d/e/1FAIpQLSfvGz12AJx8F0R_41iTZCaqzdCknujs_uh--R4As1ItyeoDzQ/viewform?usp=sharing
The questions prepared for the interview are as follows:
Which core moral principles should guide the development and use of AI systems?
Do you believe that the issues raised by modern AI can be adequately addressed by current ethical theories? Why not?
Which industries—government, healthcare, or education—do you think are most vulnerable to AI-related ethical concerns? Why?
How should AI ethics be regulated by the government and parliamentarians, in your opinion?
How can we reconcile the potential moral risks with AI's benefits, like increased efficiency and productivity?
When AI is overused in healthcare and education, which ethical issues are most important?
What current case studies or examples of unethical AI use are the most concerning to you? Why?
What advice would you provide businesses and developers to ensure that AI products are more ethical?
In the next five to ten years, how do you believe the concept of "ethical AI" will evolve?
Do you think AI can ever be really "ethical" on its own, or will it always require human oversight? Why?
Kindly view the Google Drive here:
https://drive.google.com/drive/folders/1Cz5UsDaj5Q120lWyvakk1HdJlJQ3eJCR?usp=drive_link
Please note that the recording with Mr. Joel Dean was corrupted, which resulted in a loss of audio; thankfully, Mr. Dean kept a transcript of our session, which has been added to the Google Drive folder.
The assessment questions were as follows:
1. What is the capital of Canada?
a) Toronto
b) Ottawa
c) Vancouver
d) Montreal
2. If a car is traveling at 60 km/h, how long will it take to travel 120 km?
a) 1 hour
b) 2 hours
c) 3 hours
d) 4 hours
3. Which of the following elements is a noble gas?
a) Oxygen
b) Nitrogen
c) Helium
d) Hydrogen
4. What is the next number in the sequence: 2, 6, 12, 20, __?
a) 28
b) 30
c) 32
d) 36
5. Which historical event happened first?
a) The invention of the telephone
b) The Wright brothers’ first flight
c) The moon landing
d) The discovery of penicillin
6. What is the synonym for “benevolent”?
a) Kind
b) Evil
c) Lazy
d) Angry
7. A farmer has 10 cows, but all but 4 run away. How many does he have left?
a) 10
b) 6
c) 4
d) 0
8. Which of these is NOT a programming language?
a) Python
b) Java
c) HTML
d) Windows
9. What is 35% of 200?
a) 50
b) 55
c) 70
d) 75
10. What is the capital of Haiti?
a) Port-au-Prince
b) Nassau
c) Havana
d) Santo Domingo
How long did it take to complete the previous questions?
Did you employ the use of AI to assist?
Are you confident using AI to complete specific tasks?
How do you feel about AI-powered surveillance being used in public and private spaces? Do you believe it enhances security, invades privacy, or both? Please explain your reasoning.
Have you ever changed your behavior (online or in-person) due to concerns about AI monitoring and data collection? If so, what changes did you make, and why?
How comfortable are you with AI monitoring your productivity at work and during online examinations?
AI-Generated vs. Human-Created Content
Is the above image AI-generated?
Is the above image AI-generated?
Is the following poem written by an AI?
"The wind hums soft, a fleeting song,
A melody where dreams belong.
The stars blink twice, then fade away,
Yet hope still lingers with the day.
Footsteps echo, light and free,
Through endless waves of memory.
Though time may shift and moments fly,
Some whispers never say goodbye."
Is the following poem written by an AI?
"I have a little shadow that goes in and out with me,
And what can be the use of him is more than I can see.
He is very, very like me from the heels up to the head;
And I see him jump before me, when I jump into my bed.
The funniest thing about him is the way he likes to grow—
Not at all like proper children, which is always very slow;
For he sometimes shoots up taller like an india-rubber ball,
And he sometimes gets so little that there’s none of him at all."
Assessment link:
https://forms.gle/RSCQVYX5YM2Gp8Sq5
References:
Journal Articles:
R. Binns, “Fairness in machine learning: Lessons from political philosophy,” Proc. Mach. Learn. Res., vol. 81, pp. 1–11, 2018. [Online]. Available: https://proceedings.mlr.press/v81/binns18a.html
B. D. Mittelstadt et al., “The ethics of algorithms: Mapping the debate,” Big Data Soc., vol. 3, no. 2, pp. 1–21, Dec. 2016, doi: 10.1177/-.
L. Floridi, “What the near future of artificial intelligence could be,” Philos. Technol., vol. 32, no. 1, pp. 1–15, Mar. 2019, doi: 10.1007/s--y.
H. Bleher and M. Braun, “Reflections on putting AI ethics into practice: How three AI ethics approaches conceptualize theory and practice,” Sci. Eng. Ethics, vol. 29, no. 3, May 2023, doi: 10.1007/s-.
E. Strickland, “IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care,” IEEE Spectr., vol. 56, no. 4, pp. 24–31, Apr. 2019, doi: 10.1109/MSPEC-.
Z. Obermeyer et al., “Dissecting racial bias in an algorithm used to manage the health of populations,” Science, vol. 366, no. 6464, pp. 447–453, Oct. 2019, doi: 10.1126/science.aax2342.
J. M. Eaglin, “The myth of the algorithmic risk assessment,” Fordham Law Rev., vol. 86, no. 2, pp. 357–398, 2017.
K. Lum and W. Isaac, “To predict and serve? Predictive policing and racial bias,” Significance, vol. 13, no. 5, pp. 14–19, 2016.
R. Richardson, J. M. Schultz, and K. Crawford, “Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice,” New York Univ. Law Rev., vol. 94, no. 2, pp. 192–232, 2019.
J. Buolamwini and T. Gebru, “Gender shades: Intersectional accuracy disparities in commercial gender classification,” in Proc. Conf. Fairness, Account. Transp., 2018, pp. 77–91.
C. Dwork et al., “Fairness through awareness,” in Proc. ACM ITCS Conf., 2012.
D. K. Citron, “Technological due process,” Wash. Univ. Law Rev., vol. 85, no. 6, pp-, 2019.
H. Surden, “Artificial intelligence and law: An overview,” Ga. State Univ. Law Rev., vol. 35, no. 4, pp-, 2019.
T. Wischmeyer, “Artificial intelligence and transparency in the judiciary,” Comput. Law Secur. Rev., vol. 36, p. 105373, 2020.
E. Adamopoulou and L. Moussiades, “An overview of chatbot technology,” Artif. Intell. Appl. Innov., vol. 584, pp. 373–383, 2020, doi: 10.1007/-_31.
R. Calo, “Robotics and the lessons of cyberlaw,” Calif. Law Rev., vol. 103, no. 3, pp. 513–563, 2015, doi:-/Z38BG65.
S. Cave and V. Dignum, “Algorithms and values,” Nature, vol. 573, no. 7772, pp. 446–447, 2019, doi: 10.1038/d-.
C. Dwork and A. Roth, “The algorithmic foundations of differential privacy,” Found. Trends Theor. Comput. Sci., vol. 9, no. 3–4, pp. 211–407, 2014, doi: 10.1561/-.
M. Geist, “Data ownership and privacy in the age of artificial intelligence,” J. Inf. Technol. Privacy Law, vol. 37, no. 1, pp. 12–29, 2020.
N. J. Goodall, “Machine ethics and automated vehicles,” in Autonomes Fahren, M. Maurer et al., Eds. Berlin, Germany: Springer, 2016, pp. 93–102, doi: 10.1007/-_5.
Z. C. Lipton, “The mythos of model interpretability,” Queue, vol. 16, no. 3, pp. 31–57, 2018, doi: 10.1145/-.
J. Mikk, P. Luik, and M. Taimalu, “Personalized medicine: The data ownership question,” J. Pers. Med., vol. 7, no. 1, p. 6, 2017, doi: 10.3390/jpm-.
I. Rahwan et al., “Machine behaviour,” Nature, vol. 568, no. 7753, pp. 477–486, 2019, doi: 10.1038/s--y.
T. Vilarinho, F. Almeida, and M. M. Silva, “Bias in artificial intelligence: A challenge for human rights,” Rev. Direito GV, vol. 16, no. 2, p. e2020, 2020, doi: 10.1590/-.
P. Boddington, “Towards a code of ethics for artificial intelligence,” Springer, 2017, doi: 10.1007/-.
S. Cave and S. S. ÓhÉigeartaigh, “An AI race for strategic advantage: Rhetoric and risks,” in Proc. AAAI/ACM Conf. AI, Ethics, Soc., 2018, pp. 36–40, doi: 10.1145/-.
R. Eitel-Porter, “Ethical considerations in artificial intelligence courses,” AI Ethics, vol. 1, pp. 81–85, 2021, doi: 10.1007/s-.
L. Floridi et al., “AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations,” Minds Mach., vol. 28, no. 4, pp. 689–707, 2018, doi: 10.1007/s-.
J. Morley et al., “The ethics of AI in health care: A mapping review,” Soc. Sci. Med., vol. 260, p. 113172, 2020, doi: 10.1016/j.socscimed-.
I. D. Raji and J. Buolamwini, “Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products,” in Proc. AAAI/ACM Conf. AI, Ethics, Soc., 2019, pp. 429–435, doi: 10.1145/-.
M. J. Rigby, “Ethical dimensions of using artificial intelligence in health care,” AMA J. Ethics, vol. 21, no. 2, pp. E121–E124, 2019, doi: 10.1001/amajethics-.
M. Taddeo and L. Floridi, “How AI can be a force for good,” Science, vol. 361, no. 6404, pp. 751–752, 2018, doi: 10.1126/science.aat5991.
E. Topol, “High-performance medicine: The convergence of human and artificial intelligence,” Nat. Med., vol. 25, no. 1, pp. 44–56, 2019, doi: 10.1038/s-.
B. Williamson, “Policy networks, performance metrics and platform markets: Charting the expanding data infrastructure of higher education,” High. Educ., vol. 78, no. 3, pp. 479–496, 2019, doi: 10.1007/s-.
E. Zeide, “The structural consequences of educational data processing,” Comput. Law Secur. Rev., vol. 35, no. 5, pp-, 2019, doi: 10.1016/j.clsr-
Conference Papers:
F. Doshi-Velez and B. Kim, “Towards a rigorous science of interpretable machine learning,” arXiv:-, 2017. [Online]. Available: https://arxiv.org/abs/-
Books:
S. Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York, NY, USA: PublicAffairs, 2019.
E. J. Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York, NY, USA: Basic Books, 2019.
K. D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age. Cambridge, UK: Cambridge Univ. Press, 2017.
N. Bostrom, Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford Univ. Press, 2014.
S. Russell, Human Compatible: AI and the Problem of Control. New York, NY, USA: Viking, 2019.
N. Selwyn, Should Robots Replace Teachers? AI and the Future of Education. London, UK: Routledge, 2019.
M. Andrejevic, Automated Media. London, UK: Routledge, 2020.
I. Asimov, I, Robot. New York, NY, USA: Gnome Press, 1950.
S. Feldstein, The Rise of Digital Repression: How Technology is Reshaping Power, Politics, and Resistance. Oxford, UK: Oxford Univ. Press, 2019.
E. Brynjolfsson and A. McAfee, Machine, Platform, Crowd: Harnessing Our Digital Future. New York, NY, USA: W.W. Norton, 2017.
W. Holmes, M. Bialik, and C. Fadel, Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston, MA, USA: Center for Curriculum Redesign, 2021.
D. Leslie, Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector. London, UK: Alan Turing Institute, 2019.
R. Luckin, Towards Artificial Intelligence-Based Assessment Systems. Paris, France: UNESCO, 2017.
P. Voigt and A. Von dem Bussche, The EU General Data Protection Regulation (GDPR): A Practical Guide. Cham, Switzerland: Springer, 2017.
Online Sources:
“Study finds Proctorio fails to detect student cheats,” AIAAIC, 2025. [Online]. Available: https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/study-finds-proctorio-fails-to-detect-student-cheats (Accessed: Feb. 7, 2025).
“Proctorio failed to inform TU/e about possible leak,” Cursor.tue.nl, Dec. 16, 2021. [Online]. Available: https://www.cursor.tue.nl/en/news/2021/december/week-3/proctorio-failed-to-inform-tu/e-about-possible-leak/ (Accessed: Feb. 7, 2025).
J. Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, Oct. 11, 2018. [Online]. Available: https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
J. Angwin, J. Larson, S. Mattu, and L. Kirchner, “Machine bias: There’s software used across the country to predict future criminals. And it’s biased against Black defendants,” ProPublica, 2016. [Online]. Available: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
B. Barrett, “The ethical dilemmas of AI,” WIRED, 2020. [Online]. Available: https://www.wired.com/story/ethical-dilemmas-ai/
Reports & Standards:
Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, IEEE, 2019.
Recommendation on the Ethics of Artificial Intelligence, UNESCO, 2021. [Online]. Available: https://unesdoc.unesco.org/ark:/48223/pf-