The question of ethics in AI development: Where are we going?
Artificial intelligence (AI) is expanding at an unprecedented pace around the world. Its remarkable capabilities, efficiency, and convenience in daily life have captured widespread attention, and it is opening doors to new opportunities and innovations in fields including health, education, business, and communication. But beneath these positive aspects, one crucial issue is often overlooked: the ethical use of AI.
As technology develops, the challenges associated with it also become more complex. AI collects and analyzes large amounts of data, automates decisions, and is able to draw conclusions in a very short time.
These capabilities have made AI powerful. But the greater the power, the more sensitive the responsibilities become. Any misuse, erroneous decisions, or biased data can negatively affect not only the privacy of individuals, but also the structure of society.
Use of AI in sensitive areas and high risks
AI is currently applied in fields ranging from health and information management to the courts. In the health sector, it assists with disease identification, treatment planning, and patient care; in the justice sector, it supports evidence-based case analysis.
Data in such areas is extremely sensitive: flawed data can lead to wrong decisions or serious breaches of privacy. Earlier, sketches were used to identify criminals; today, AI can do everything from facial recognition to behavioral analysis. But there are real risks: an innocent person may be wrongly identified and convicted, and such systems create major challenges for privacy, fairness, and accountability.
Therefore, the more powerful the technology, the more it needs to be kept within ethical boundaries.
Data-based AI: But what if there is bias in the data?
The effectiveness of AI depends on the quality of the data. But if the data itself is wrong or biased, the results of AI will also be unfair. For example, if a dataset contains data that says ‘Asians are naturally good at math’, it will produce unfair and discriminatory results.
This is not just a theoretical issue. In an early and widely reported incident, an image-recognition system labeled dark-skinned people as 'gorillas' while correctly recognizing light-skinned people as human.
This caused a great deal of outrage, and justifiably so. The root cause was an unbalanced dataset in which light-skinned faces were far better represented than dark-skinned ones.
Such biased data is not a simple error: it can reinforce racial and other forms of discrimination. These incidents show that AI is not inherently neutral; it reflects the data we feed it.
Ways to eliminate bias
Balanced datasets: Ensuring representation of different cultures, genders, and communities.
Diverse datasets: Not limited to a specific group, but covering the full range of human experience.
Continuous auditing: Regularly testing the AI system for bias and retraining it with improved data.
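To make the idea of continuous auditing concrete, here is a minimal sketch in Python. It computes the selection rate per group and the disparate-impact ratio between the lowest and highest rates; the data, group labels, and the 0.8 threshold mentioned in the comment are illustrative assumptions, not part of any specific system described above.

```python
# Minimal bias-audit sketch: compare selection rates across groups.
# The decisions list below is hypothetical data, for illustration only.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, chosen = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        chosen[group] = chosen.get(group, 0) + int(was_selected)
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate (1.0 = perfect parity)."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # about 0.33, far below the common 0.8 rule of thumb
```

An audit like this would be rerun regularly as the model is retrained, flagging any group whose selection rate falls well below the others.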
A balanced dataset matters even more in tasks like resume evaluation. When there are thousands of applicants, manually reviewing every resume is extremely difficult, so AI tools are used to automate the selection process. But here too, bias can be a major challenge.
A real-world example of how unbalanced data can harm an AI system is the Amazon hiring algorithm reported in 2018, which was designed to select resumes automatically.
During an audit, however, it was discovered that the system had learned to prioritize male candidates because the training data was dominated by male-centric language. As a result, resumes from female candidates were systematically downgraded. Once this bias came to light, the system was shut down. The incident clearly shows how dangerous unbalanced datasets can be.
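One simple way such skew can surface during an audit is to count how often gendered terms appear in the resumes the model ranks highest: if one gender's terms dominate the top-ranked set while the applicant pool is balanced, the model has likely learned a proxy for gender. This is a toy Python sketch of that check, with an invented word list and invented resumes; it is not Amazon's actual method.

```python
# Toy audit: count gendered terms in top-ranked resume text.
# The word list and sample resumes are illustrative assumptions.
from collections import Counter

GENDERED = {"he", "him", "his", "she", "her", "women's", "men's"}

def gendered_term_counts(texts):
    """Count occurrences of known gendered terms across a list of texts."""
    counts = Counter()
    for text in texts:
        for word in text.lower().split():
            if word in GENDERED:
                counts[word] += 1
    return counts

top_ranked = [
    "He led his robotics team to a national title",
    "Captain of the women's chess club and math olympiad winner",
]
print(gendered_term_counts(top_ranked))
```

A real audit would use far richer signals than word counts, but even this crude check can reveal that a "neutral" ranking is reacting to gendered language.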
Similarly, AI can recommend products to customers, but only products that exist in its dataset. For example, if a brand is not listed on the popular e-commerce platform Daraz, Daraz's recommendation system cannot suggest that brand's products. This too is a form of bias arising from an unbalanced and limited dataset.
During my recent visit to India, I had the opportunity to learn about a project called Bhashini. Bhashini is primarily a real-time language translation system: for example, it converts Hindi into Urdu, English, or other languages. Since the project covers so many languages, building its dataset is very challenging.
If the system contains no data for the Nepali language, for instance, it cannot translate any content into Nepali. This shows how the lack of a balanced dataset creates bias and reduces impartiality; we should therefore always focus on building balanced datasets.
Ethical aspects
AI development often involves collecting large amounts of data, which raises the important concept of informed consent. What if user data and logs are collected without a person's consent? What if privacy issues arise, or access controls are not properly managed?
Such issues can lead to bias, privacy leaks, and security risks. To prevent this, data should always be collected only with informed consent from the person concerned. For example, if patient data is used in a health information system, the patient’s consent is required. Paying attention to these ethical responsibilities helps reduce privacy concerns and negative consequences.
Another important ethical concern with AI is its misuse and the spread of misinformation. AI-generated fake videos and audio (deepfakes and synthetic propaganda) spread false information. Ghostwriting is also a growing problem: people rely on AI to complete assignments or gather information without verifying it, which undermines academic integrity. Cyberbullying, impersonation, and fake-identity creation using AI are additional threats.
For example, a report published in 2024 revealed that deepfakes were used to interfere in elections: the voices of political leaders were cloned and used in robocalls to mislead voters.
Governance, regulation, and accountability
Another important ethical aspect of AI is governance and accountability. Who is responsible when AI tools cause harm? This requires a clear government role.
A study conducted in the UK found that about 70% of people considered the use of facial recognition for airport security beneficial, but many expressed concerns about job losses, the use of AI in border control, and the risk of police misidentifying and wrongly implicating innocent people. Because such misidentifications can recur again and again, this is a serious ethical problem.
Another major challenge is language dominance. Although there are over 7,000 languages in the world, most online content is limited to 10 languages, mainly English, Spanish, Hindi and Chinese. This makes it difficult for AI tools to accurately represent diverse cultures and languages. In the case of Nepal, this makes communication and learning more difficult and clearly reveals the biases arising from linguistic imbalances.
Another important concern is validation. Many AI tools are being released at a rapid pace. But how do we know whether their results are accurate, misleading or wrong? Just as we trust ISO-certified products, we need a system to know whether an AI tool has been properly validated.
Misleading results and security
The misleading or wrong results that AI can produce also need to be seriously considered. If someone is given an AI tool and does not know how to use it, the information they receive may not be accurate. Therefore, it is important to be aware of how to use such tools. It is imperative that we develop trustworthy, reliable, and safe AI systems. We must ensure that AI does not spread misinformation, does not harm users, and that the risks arising from misuse are minimized. Addressing all these issues is the key to developing future-oriented and responsible AI.
In conclusion
While the debate continues over whether these problems stem from AI itself or from how it is used, the problems are real. To solve them, we must be able to trace data back to its source. Watermarking helps identify ownership, but media forensics and authenticity standards are also needed to verify provenance.
These concerns are particularly relevant to today's Gen Z. AI is pushing the world in new directions, and its impact is clearly visible in everything from health, education, administration, and business to employment. But to realize its benefits, ethics, transparency, privacy, data fairness, protection from misinformation, and government regulation must be given equal priority. Policy discussions on AI have already begun in Nepal, which is a positive step. What is needed now is clear standards, defined responsibilities, a basis for sanctions, and a sustainable system for safe and trustworthy AI development.