
Humans vs. AI: how ethics are formed and where they fail

By Ken Philips


The rise of artificial intelligence forces us to confront a fascinating and urgent question: how do we and AI compare when it comes to ethics? We develop our sense of right and wrong over time, shaped by experience, culture, and emotion. AI, on the other hand, follows a different path, one dictated entirely by programming and data. While human morality evolves, AI is only as ethical as the rules and objectives it is given. But what happens when things go wrong? Where do ethical failures come from, and how can they be prevented?


The origins of ethics

For us, ethics do not appear out of nowhere. We are not born with a detailed moral code, but we do have instincts—traces of evolutionary survival strategies that encourage cooperation and fairness. Even as infants, we show signs of empathy, reacting to distress in others as if it were our own. As we grow, culture, education, and personal experience refine these instincts, shaping them into more sophisticated moral beliefs. Religion, philosophy, and legal systems provide guidelines, while introspection and experience allow us to adjust and redefine our own ethical principles. Over time, our morality shifts, evolving alongside society itself.


AI, however, does not develop its ethics naturally. It does not feel, reflect, or question. Instead, its behavior is determined entirely by external programming, optimization goals, and training data. Unlike a child learning from parents and teachers, AI follows patterns extracted from vast amounts of information—some of which may be flawed, biased, or outdated. It does not understand fairness or justice in any meaningful way; it simply applies rules and weighs probabilities based on what it has been trained to prioritize. Left unchecked, AI will optimize ruthlessly, regardless of whether its conclusions align with human values.


How ethics are enforced

Our morality is governed by both internal and external forces. Internally, emotions such as guilt, pride, and empathy act as invisible guardrails, nudging us toward ethical behavior. Externally, societal expectations, laws, and consequences help enforce moral norms. Even when our personal desires clash with ethical ideals, the presence of external accountability—punishment for wrongdoing, praise for good deeds—plays a crucial role in keeping our behavior in check.

AI, in contrast, relies entirely on external constraints. It has no conscience, no innate sense of right and wrong. Developers must build ethical safeguards into its programming, setting boundaries and defining acceptable outcomes. Regulatory frameworks and validation processes attempt to ensure that AI behaves responsibly, but the effectiveness of these measures depends on the foresight and diligence of the people designing them. If ethics are not explicitly encoded, AI has no reason to prioritize them. Unlike us, it will not experience regret, moral dilemmas, or second thoughts. It will simply execute its task as efficiently as possible—whether or not that task aligns with ethical principles.


Why ethics fail

Despite society's best efforts, humans and AI alike fail ethically, but for very different reasons. Our ethical failures often stem from self-interest, bias, or pressure. The temptation of personal gain can override moral considerations, leading us to act dishonestly or selfishly. Bias, whether conscious or unconscious, distorts judgment, making it harder to see ethical issues clearly. In moments of stress or group pressure, even those of us with strong moral values can make questionable choices, swayed by circumstances beyond our control.

AI, on the other hand, does not fail because of selfishness or emotion—it fails because of flawed design. It optimizes for the goals it has been given, even if those goals produce unethical outcomes. If trained on biased data, it will absorb and amplify that bias without hesitation. If it lacks the ability to interpret context, it may apply rules rigidly, missing the nuances that we naturally consider. Unlike us, AI will never pause to ask, “Is this the right thing to do?” It will simply follow the logic it has been given, for better or worse.

Consider an AI system used for hiring. If it is trained on past hiring data that reflects historical discrimination, it will perpetuate those same biases, filtering out candidates based on patterns that society is actively trying to correct. The AI is not "sexist" or "racist" in the way a person might be; it simply reproduces the biases present in its training data, and without intervention it will keep making those flawed decisions indefinitely.
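To make the hiring example concrete, here is a minimal, hypothetical sketch: a toy screening model is "trained" on invented historical hiring records in which an irrelevant proxy feature (membership in "club_x", standing in for some demographic correlate) happens to track past hires. The model then prefers an applicant who shares that proxy over an equally qualified one who does not. All data, feature names, and the scoring rule are fabricated for illustration; real hiring models are far more complex, but the failure mode is the same.

```python
# Hypothetical illustration: a naive screening model trained on biased
# historical hiring data reproduces that bias. All records and feature
# names are invented for this example.
from collections import Counter

# Historical hires. "club_x" correlates with who was hired in the past,
# not with job performance.
past_candidates = [
    {"degree": True,  "club_x": True,  "hired": True},
    {"degree": True,  "club_x": True,  "hired": True},
    {"degree": True,  "club_x": False, "hired": False},
    {"degree": False, "club_x": True,  "hired": True},
    {"degree": True,  "club_x": False, "hired": False},
]

def train(data):
    """Weight each feature by how often it co-occurs with a hire."""
    weights = Counter()
    for row in data:
        for feat in ("degree", "club_x"):
            if row[feat]:
                weights[feat] += 1 if row["hired"] else -1
    return weights

def score(weights, candidate):
    """Sum the learned weights of the features a candidate has."""
    return sum(w for f, w in weights.items() if candidate[f])

weights = train(past_candidates)

# Two equally qualified applicants; only the proxy feature differs.
a = {"degree": True, "club_x": True}
b = {"degree": True, "club_x": False}
print(score(weights, a) > score(weights, b))  # prints True
```

The model never "decides" to discriminate: the proxy feature simply carries the historical bias forward into every future score, which is why auditing training data matters more than inspecting the algorithm alone.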


What this means for the future

For us, ethical growth is an ongoing process. Our values shift over time, adapting to new knowledge, cultural changes, and deeper reflection. The responsibility of shaping AI ethics falls on us—on the developers, regulators, and users who decide how these systems should function. If AI is to align with our values, ethical considerations must be embedded into its very foundation.

This means designing AI systems that are transparent, so that their decisions can be understood and challenged. It means ensuring accountability, so that when AI makes harmful mistakes, there are mechanisms in place to correct them. It also means allowing for adaptability, so that AI ethics can evolve alongside society, rather than remaining frozen in outdated assumptions.


Humans and AI: a new ethical balance

The reality is that AI does not possess true ethics—at least, not in the way that we do. It does not weigh moral dilemmas, experience guilt, or debate questions of right and wrong. But that does not mean AI is doomed to be unethical. If designed thoughtfully, AI can complement our judgment, offering consistency where we are prone to bias and efficiency where we struggle with complexity.

The future of ethics in an AI-driven world will not be about replacing human morality but enhancing it. If this balance is struck correctly, AI will not make us less ethical—it will help us become better.


© 2024 by Ken Philips
