Ethical Assessment

Speed is the main factor that distinguishes the technological advancement of the last 100 years from that of the previous 1000. Automation technology is about to reach the right-hand limit of its exponential development curve, thanks to artificial intelligence powered by large neural networks. Once this limit is reached, reality risks being diluted, to the point of dissolving, into a broth of imitative confusion devoid of thought, creativity, and centers of interest.

Synthetic content is an extremely powerful tool for political, moral, and social disinformation, one we are currently unable to recognize or counter in any meaningful way. In this context, the formation of balanced and free opinions risks becoming unfeasible. The surface-level risk is the undermining of freedom of thought and, consequently, of the democracy of which it is a pillar. The deeper, systemic risk is the definitive compromising of the capacity for free thought itself: the ability to go beyond patterns and find new balances and centers of symmetry that once seemed unattainable.

The real problem we are facing in these months does not concern AI itself, which is precisely the technology we needed in a context so frenetically unbalanced towards the extremes, but rather the tools we DO NOT have to manage it, control it, validate it, and use it correctly. Ethical assessment, that is, the verification of the soundness of the entire supply chain of AI-powered tools, deserves greater attention, from both a technological and an ethical point of view. Ethical assessment rests on two pillars: standardization and robustness.

Standardized Assessments

The lack of standardization in the definitions of ethical AI makes it difficult to establish a general framework for evaluating the latest developments. The major developers seem to be acting in an isolated and independent manner, testing their models against different assessment benchmarks. This approach complicates efforts to systematically compare the risks and limitations of the largest artificial intelligence models currently available. This fragmentation forces us to ask: how can we achieve a shared understanding of the ethical implications in the development and use of AI if the assessments themselves are fragmented?

The first standardized tool to be developed is the philosophical-conceptual one, whose aim is to build a coherent and shared ethical framework. This conceptual framework must be flexible: rather than resting on prepackaged assumptions, it should be guided by fundamental questions that admit a range of possible answers depending on the operational context. The main questions might be:

  1. What is the meaning of artificial intelligence for society?
  2. How can we define AI as an autonomous and responsible system?
  3. What role do social expectations play in creating "moral" AI?
  4. How can standardization influence the social representation of AI?
  5. How can we assess the environmental and social impact of AI?
  6. What is the role of institutions and big companies in defining ethical goals for AI?

The second standardized tool we need is a technological one. To date, various benchmarks have been proposed for assessing responsible artificial intelligence; these are useful tools, but they suffer from a long list of significant limitations. Such tests aim to assess a system's ability to reason about ethical dilemmas and make decisions consistent with moral principles. However, designing them is complex and open to criticism regarding their objectivity and their ability to capture the complexity of real ethical situations.

For a complete ethical assessment of AI, we could consider a combination of technological tools (a minimal sketch of the first of these follows the list), including:

  1. Ethical reasoning benchmarks that assess the consistency of AI decisions with moral principles.
  2. Debugging and explanation tools that allow understanding the AI's decision-making process and identifying potential biases or undesirable behaviors.
  3. Continuous monitoring systems that evaluate the ethical performance of AI in real-world contexts and identify any issues as they emerge.
  4. Human feedback mechanisms that allow users to report undesirable AI behaviors and contribute to the continuous improvement process.
  5. Simulation tools that allow testing the AI in hypothetical scenarios and assessing the potential consequences of its actions.
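
To make the first item concrete, here is a minimal, purely illustrative sketch of an ethical-reasoning benchmark harness in Python. The scenarios, the labels, and the `query_model` stub are hypothetical assumptions, not part of any published benchmark; a real harness would call an actual model API and use a scenario set validated by human reviewers.

```python
# Minimal sketch of an ethical-reasoning benchmark harness.
# Everything here is hypothetical: the scenario texts, the expected
# labels, and query_model() all stand in for real components.

from dataclasses import dataclass

@dataclass
class Scenario:
    prompt: str    # ethical dilemma posed to the model
    expected: str  # label a human panel agreed on

SCENARIOS = [
    Scenario("Reusing a copyrighted image without a license is:", "unacceptable"),
    Scenario("Citing a source when quoting it verbatim is:", "acceptable"),
]

def query_model(prompt: str) -> str:
    """Placeholder for a real model call; returns a moral label."""
    return "unacceptable"  # stub answer for illustration only

def run_benchmark(scenarios: list[Scenario]) -> float:
    """Fraction of scenarios where the model matches the human label."""
    hits = sum(query_model(s.prompt) == s.expected for s in scenarios)
    return hits / len(scenarios)

if __name__ == "__main__":
    print(f"agreement with human panel: {run_benchmark(SCENARIOS):.0%}")
```

Even this toy version makes the core design choice visible: the score is only as meaningful as the human-agreed labels behind it, which is exactly where the objectivity criticisms discussed above apply.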

The IEEE's "Framework for AI Development" can be a good starting point for finding technologies capable of guiding the responsible development of AI algorithms. However, there is still a lot of work to be done to overcome the limitations and defects that researchers and experts have already identified:

  1. Lacks specificity: The framework is too generic and does not provide specific guidance on how to implement best practices for developing AI algorithms.
  2. Not detailed enough for emerging technologies: The framework is not detailed enough for emerging AI technologies, such as machine learning and deep learning algorithms.
  3. Lacks implementation tools: The framework does not provide concrete implementation tools to help put ethical best practices for developing AI algorithms into practice.
  4. Not binding enough: The framework is not binding enough and leaves a lot of room for interpretation, which can lead to problems in its practical application.
  5. Lacks effective governance: The framework does not clearly establish how to govern the development and use of AI algorithms in an ethical and responsible manner.
  6. Does not take into account cultural differences: The framework does not take into account cultural differences that may influence the perception and use of AI technologies.
  7. Lacks systematic evaluation: The framework does not provide a systematic evaluation of the ethical impact of AI technologies.

Robustness

AI systems are subject to very serious problems, such as blatant copyright infringement and the generation of compromising, sensitive, or exploitative content. Hardcoded rules are of little use at the moment; the issue of robustness therefore raises questions about responsibility and transparency: who will be held responsible for the harmful actions of an AI? How can we ensure that training processes are transparent and open to scrutiny? How can we monitor and evaluate the practical use of AI in different and changing contexts?
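
One way to make robustness operational, sketched below under stated assumptions, is to check whether a system's decisions survive trivial rephrasings of the same input. The `classify` stub and the paraphrase set are hypothetical; in practice the stub would be a real model call and the paraphrases would be generated systematically.

```python
# Minimal sketch of a robustness check: a system whose answers flip
# under trivial rephrasings of the same question is not one we can
# rely on for important decisions. classify() is a hypothetical stub.

def classify(text: str) -> str:
    """Placeholder for a real model call; returns a decision label."""
    return "approve" if "loan" in text.lower() else "reject"

def consistency_rate(variants: list[str]) -> float:
    """Fraction of paraphrases that agree with the first answer."""
    reference = classify(variants[0])
    agree = sum(classify(v) == reference for v in variants)
    return agree / len(variants)

paraphrases = [
    "Should this small-business loan application be approved?",
    "Is approving this application for a small-business loan advisable?",
    "Would you approve this credit request from a small business?",
]

print(f"consistency: {consistency_rate(paraphrases):.0%}")
```

A system that flips its decision under paraphrases like these fails the most basic reliability test, whatever its benchmark scores say.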

Having a robust AI means having a reliable system, one on which we can truly rely for important decisions. The reliability of an intelligent system is a necessary condition for establishing a relationship of trust with it. As the philosopher Luciano Floridi argued, trust is a fundamental concept in the information age and artificial intelligence era, as we establish increasingly close relationships with non-human agents. A non-robust and unreliable AI undermines this relationship of trust at its core, making synergistic collaboration between humans and machines impossible.

Furthermore, a robust and reliable AI system greatly reduces the risks associated with automation, allowing us to fully reap the benefits of AI without excessive concerns. The philosopher Nick Bostrom has highlighted how the potential existential risks linked to an advanced but unreliable artificial superintelligence are anything but science fiction scenarios. Developing robust AIs from now on is an ethical imperative to ensure a safe future.

AI-powered automation already makes a considerable number of critical decisions today, and we are moving towards a future increasingly decided by AI, yet risk perception is still not sufficiently developed and is not homogeneous worldwide. A robust and reliable AI is the key to confidently managing this transition towards an increasingly automated and intelligent world.

Future or Dystopia

Some of the risks we have discussed seem to come from dystopian narratives of a distant future. It therefore becomes crucial to distinguish realistic threats from mere speculation. A balanced approach requires careful analysis of the available evidence, an understanding of current trends, and a measured assessment of possible consequences. Only then can we separate tangible risks from dystopian narratives and focus on the most pressing ethical issues.

However, carefully assessing risks also requires considering extreme scenarios that are not currently technologically feasible. Science fiction has often anticipated technological developments and scenarios that eventually materialized. Securing the future of AI is therefore a matter of balance: we must imagine every possible scenario and then bring that imagination back to factual reality through a scientific assessment of the facts. Distinguishing dystopia from possible futures while pushing free creative thinking to its limits is the superpower humanity needs right now.

