Nov 27 (Nikkei) - Japan will make companies responsible for explaining decisions made by artificial intelligence software they use, according to a government draft of legal guidelines shared with Nikkei.
The draft guidelines stipulate that AI should not infringe on basic human rights. Personal information should be handled carefully, they continue, and AI's security must be guaranteed. They also call for maintaining a fair competitive playing field, making AI more accessible by improving education, and building an environment that encourages cross-border data sharing.
A top goal is to increase transparency around how AI makes decisions, such as whether to extend a loan or hire someone for a job. A lack of clarity in AI's decision-making standards can leave the person being evaluated dissatisfied or uneasy.
There are also fears that AI could factor gender or ethnicity into a decision on whether to hire someone, for instance, without the knowledge of even the company employing it. Giving people ultimate responsibility for clearly explaining decisions is expected to ease fears surrounding the use of the technology.
The seven guidelines will be officially unveiled next month by a government council on forming principles for a "human-centric" AI society, chaired by University of Tokyo Professor Osamu Sudo. Japan will call on Group of 20 members to adopt the rules at June summit meetings in Osaka.