Research by the Future of Humanity Institute at the University of Oxford, which surveyed the views of machine learning experts, suggests that by 2031 "strong" AI could significantly affect the roles of professional truck drivers, retail workers, translators, journalists, tour guides and librarians, all of which could fade into oblivion at the current pace of AI development. However, we believe that many decades will pass before "strong" AI becomes advanced enough to replace humans in far more complex tasks. When (and if) that happens, it will certainly raise various ethical questions for which there are no answers at present.
What has the US done?
The USA recently adopted Executive Order 13859, which sets out ten principles regarding AI and which was communicated to the heads of executive departments and federal agencies by memorandum. The goal is to reduce barriers to the development of AI and to encourage innovation and development in general.
However, given the dangers of this phenomenon, only development of so-called weak AI is encouraged: systems that collect and process data from various sources of information and perform specific, domain-bound tasks. Strong AI, which implies mimicking human cognitive abilities and establishing a state of consciousness capable of improving itself beyond human limits, is excluded from the Memorandum.
These principles should guide agencies when deciding on applications relating to the design, development, deployment and operation of AI.
The principles are:
- Public trust in AI – since AI can have a positive effect on social and economic life, but also carries privacy risks, agencies should consider only applications that will strengthen public trust in AI;
- Public participation – agencies should, within legal limits, allow the public to participate in all phases of the process preceding decision-making and provide access to relevant information and supporting documents in order to uphold the principle of transparency;
- Scientific integrity and quality of information – agencies must safeguard scientific integrity and ensure that AI applications produce predictable, reliable and optimized outcomes, which can be achieved only if the data used to train the AI system are of sufficient quality for the intended use;
- Risk assessment and management – not every risk has to be prevented; it is normal that flaws exist, but agencies must assess which risks are acceptable and which are not, because unacceptable risks can cause great damage and generate more costs than benefits;
- Benefits and costs – benefits of an economic and social nature that affect public health and other key areas must be given particular weight, but they must be weighed against the costs that implementing a given AI system imposes on existing areas, as well as the costs of not implementing such a system;
- Flexibility – a flexible approach that keeps pace with the rapid and dynamic development of AI is necessary. For such an approach to succeed, values such as public health, safety and privacy must be taken into consideration;
- Fairness and non-discrimination – when examining an application, agencies must assess whether a given AI application reduces the discrimination produced by human subjectivity or, in some cases, leads to discriminatory outcomes or decisions that diminish public trust in AI;
- Transparency and disclosure – agencies may sometimes disclose how AI can affect human end users in order to strengthen public trust in AI; however, they must first consider whether the existing or evolving legal, policy and regulatory environment is sufficient;
- Safety and security – agencies should promote the development of AI systems that are safe, secure and operate as intended, in particular by strengthening resilience to cyber-attacks and preventing the exploitation of AI system vulnerabilities;
- Coordination between agencies – agencies should cooperate with one another and exchange experience in order to ensure predictability and consistency in their decisions, all with the aim of contributing to a coherent overall AI policy.
Should Serbia adopt similar principles?
If our country intends to be a leader in the IT industry, it should certainly and urgently move in that direction, in both a regulatory and a planned fashion, just as it needs to promptly adopt a strategy for the development of blockchain technology, considering that Germany, for example, adopted such a strategy at the end of last year.