

Artificial Intelligence (1)

Date: 2023-09-12

Chapters: Overview | Benefits & Application | Ethics & Regulations | International Organizations

 

Overview

 

“AI is a field dedicated to building systems that mimic human intelligence, enabling computers to think and learn like humans. AI is everywhere, from virtual assistants on smartphones to self-driving cars. It allows machines to understand and respond to our commands, predict future events based on data, and even perform tasks like surgery or explore distant planets.”
- (ChatGPT)

“One of the key technologies of AI is machine learning, which enables computers to learn from data and make decisions. For example, a computer can learn how to recognize cats by observing many cat images. Over time, the computer becomes increasingly skilled at cat recognition. AI is also closely related to robotics, giving rise to intelligent machines that can clean homes or assist in complex surgeries. As AI continues to evolve, it will create machines that can think and learn like humans, making our lives more convenient, efficient, and safe.”
- (Google Bard)
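To make the quoted idea of "learning from many examples" concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn. Synthetic feature vectors stand in for real cat photos (an assumption made so the snippet runs without any dataset); real systems instead train deep neural networks on large collections of labeled images, but the core loop is the same: fit a model to labeled examples, then measure how well it recognizes examples it has never seen.

```python
# A minimal sketch of supervised learning as described above.
# Synthetic 64-dimensional feature vectors stand in for images so the
# script runs anywhere without downloading data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Class 1 = "cat", class 0 = "not cat", drawn from slightly shifted distributions.
n = 1000
cats = rng.normal(loc=0.5, scale=1.0, size=(n, 64))
not_cats = rng.normal(loc=-0.5, scale=1.0, size=(n, 64))
X = np.vstack([cats, not_cats])
y = np.array([1] * n + [0] * n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Observing many cat images": the model fits its parameters to labeled examples.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on examples the model has never seen.
print(f"accuracy on unseen examples: {clf.score(X_test, y_test):.2f}")
```

With more and better training examples, the measured accuracy generally improves, which is the sense in which the computer "becomes increasingly skilled at cat recognition."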

 

The recent advances in "generative AI" chatbots, a field of AI that has garnered significant attention for some time, have shocked professionals across various sectors. One after another, tech giants have integrated AI into a wide range of tools, products, and services. AI chatbots have reached beyond what was traditionally possible by enhancing search results and providing suggestions through search engine integration. They can also work with productivity tools to boost efficiency, inspiring the private sector to create innovative products and services. For instance, AI can provide text-to-image and generative fill technology when applied to photo editing software. Thanks to AI, anyone who can use a computer now has the potential to become the next Picasso.

AI Chatbot (credit: Emiliano Vittoriosi on Unsplash)

 

 

Benefits & Application

When compared with humans, AI is more adept at performing repetitive, standardized tasks such as routine checks, responses, and record-keeping. Moreover, AI can process vast amounts of information, search for more relevant and precise data, and work anywhere, anytime, without needing to rest. AI is also highly adaptable and can quickly generate visual and textual outputs in a wide range of distinct styles, personas, and versions. Thus, AI that has undergone specialized training can accelerate productivity and development in specific fields.

According to Google Cloud, the six major benefits of AI are: (1) automation, (2) reduced human error, (3) elimination of repetitive tasks, (4) speed and precision, (5) limitless availability, and (6) accelerated research and development. Thanks to these benefits, AI has the potential to play a critical role across various fields. Currently, its most widely applied function is fast and precise "recognition and analysis." For instance, in aquaculture, underwater cameras enable real-time monitoring of fish growth and smart feeding. In agriculture, drones equipped with AI recognition and analysis greatly extend visibility, helping farmers monitor crop health and apply pesticides precisely. In healthcare, digital pathology AI models help analyze and interpret pathological images, detecting areas of concern and assisting doctors in making objective diagnoses and preventing potential health issues early. Recognition and analysis technology alone has reduced costs across many aspects of healthcare while increasing survival rates.

AI also contributes to "cybersecurity" by autonomously scanning networks for cyber-attacks and threats. Google, for example, is developing new AI models to address challenges such as "threat overload," "cumbersome tools," and the "cybersecurity talent gap." Cybersecurity professionals now use AI to synthesize huge amounts of information, identify threats in log events and network-flow data, and provide clients and institutions with suggestions for improving data security and upholding compliance. In the future, AI will continue to combat the growing threat of cyberattacks by automating defense responses.
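As an illustration of the kind of analysis described above, the sketch below applies unsupervised anomaly detection to made-up network-flow records using scikit-learn's IsolationForest. The feature set (bytes sent, duration, distinct ports) and the data are illustrative assumptions, not a description of any vendor's actual models; production systems work over far richer log and flow telemetry.

```python
# A minimal sketch of AI-assisted threat hunting: flag unusual network-flow
# records with unsupervised anomaly detection. Features and data are made up
# for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Each row is one "flow": [bytes_sent, duration_sec, distinct_ports].
normal_traffic = np.column_stack([
    rng.normal(5_000, 1_000, 2_000),   # typical transfer size
    rng.normal(2.0, 0.5, 2_000),       # typical duration
    rng.integers(1, 4, 2_000),         # few ports touched
])
suspicious = np.array([
    [500_000, 0.2, 40],   # huge burst, very short, many ports (scan-like)
    [900_000, 0.1, 55],
])
flows = np.vstack([normal_traffic, suspicious])

# Fit on the bulk of traffic; records far from the norm are scored as outliers.
detector = IsolationForest(contamination=0.01, random_state=0).fit(flows)
labels = detector.predict(flows)          # -1 = anomaly, 1 = normal

print(f"flows flagged for analyst review: {(labels == -1).sum()}")
```

In practice, flagged flows would typically be routed to analysts or a security monitoring platform for triage rather than acted on automatically.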

However, using AI to automate tasks may also exacerbate employment inequality by increasing the premium placed on "non-repetitive" work and "creative" thinking. The challenge for policymakers is therefore how to foster progress and innovation in AI while shielding workers and consumers from the types of harm that could arise.

 

 

Ethics & Regulations

AI is not without its drawbacks. As AI advances and its applications spread across domains, associated issues have arisen, such as privacy breaches, algorithmic bias, discrimination, misjudgments, fraud, and a growing number of forgery cases. These are now matters of critical concern for the international community.

For example, in early 2023, Italy's data protection authority banned a commercial entertainment AI chatbot from continuing to collect users' data, following complaints that its conversations involved sexual harassment and that its handling of personal information violated the European Union's General Data Protection Regulation (GDPR). In April of the same year, the authority went further and temporarily banned ChatGPT, making Italy the first Western country to do so and prompting other governments to contemplate stricter regulatory measures for AI.

Subsequently, in June, the European Parliament approved its draft of the Artificial Intelligence Act, which establishes risk categories and defines specific requirements that must be met before AI is put into practical use, with a strong emphasis on user protection. Stricter regulation is required in critical areas concerning the safety of human life and property, such as healthcare, transportation, and financial services, to prevent ethical lapses and errors. Because generative AI can help improve efficiency in the public sector, the Executive Yuan of Taiwan passed the "Administrative Guidelines for the Use of Generative AI by the Executive Yuan and its Affiliated Agencies" in August. These guidelines mandate that AI be used in a responsible and trustworthy manner when performing official duties. Those working under the guidelines must also take confidentiality and professionalism into consideration and never use AI on highly classified documents. Moreover, principles such as security, privacy, data governance, and accountability should be upheld to maintain autonomy and control.

In addition, National Science and Technology Council (NSTC) Minister Tsung-Tsong Wu announced that the NSTC will bring together research and academic resources from across Taiwan and collaborate with the private sector to create the domestically developed Trustworthy AI Dialogue Engine (TAIDE). Adhering to principles of scientific ethics and social responsibility under regulated supervision, TAIDE aims to ensure fairness, transparency, and reliability so that AI models can be used reasonably and in compliance with relevant regulations.

Furthermore, Minister of Digital Affairs (MODA) Audrey Tang pointed out that AI's ability to mimic the faces and voices of others not only causes financial losses and damages interpersonal trust but, if harnessed by authoritarian regimes or malicious actors, could also significantly disrupt democratic systems. In addition, the most advanced generative AI models are often trained internally by a single company and then offered for global use. Yet different regions' cultures and values often lead to significant differences in how information is interpreted, so models from a single source not only tend to be biased toward specific groups but also struggle to reflect the diversity of the world.

To tackle significant issues posed by AI, MODA will collaborate with the "Technology, Society, and Democracy Center" think tank of the National Science and Technology Council to establish relevant regulations and education and training mechanisms. This will assist the government in enhancing administrative efficiency through the use of AI while maintaining a responsible and trustworthy attitude, preserving independent thinking and creativity, and promoting mutual trust between the people and the government.

 

 

International Organizations

To address the rapid development of AI, countries around the world are incorporating AI-related issues, such as privacy, confidentiality, information security, and national security, into their policy considerations. Not only do the public and private sectors have their own perspectives and rules on AI use, but international and regional organizations are also assessing AI's current state and future roles, taking into account factors such as differences in culture and geography, political systems, values, markets, and partners. These efforts aim to provide both the public and private sectors with more forward-looking and comprehensive references for guiding AI.

 

European Union (EU)

The European Union (EU) has progressed from advocating self-regulation guidelines to establishing market regulation standards. These enhanced standards aim to ensure that AI products and services within the EU adhere to the region's values. In June 2023, the European Parliament adopted its position on the EU Artificial Intelligence Act, which is set to become the world's first comprehensive AI legislation. Its primary mission is to ensure that AI systems in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. The systems should also be human-supervised rather than autonomously managed so as to avoid harmful consequences. The act also establishes different regulations for different risk levels, including "Unacceptable Risk," "High Risk," and "Limited Risk."

 

Organization for Economic Cooperation and Development (OECD)

(Australia and New Zealand are both OECD and New Southbound Policy countries)*

The Organization for Economic Cooperation and Development (OECD) consists of 38 member countries and focuses primarily on research and analysis. It emphasizes respecting market mechanisms, reducing government intervention, and achieving economic cooperation and development among member governments through policy dialogue. Regarding AI, the OECD published the "Recommendation of the Council on Artificial Intelligence" in 2019 and "What are the OECD Principles on AI?" in 2020. These documents set out five fundamental principles for AI:

  1. Inclusive growth, sustainable development, and well-being
  2. Human-centered values and fairness
  3. Transparency and explainability
  4. Robustness, security and safety
  5. Accountability

 

Within the OECD, numerous specialized committees, working groups, and expert panels discuss various important topics across areas such as technology, economics, trade, education, and employment.

 

Global Partnership on Artificial Intelligence (GPAI)

(Australia, India, New Zealand, and Singapore are all GPAI and New Southbound Policy partner countries)*

Built upon the foundation of the OECD, the Global Partnership on Artificial Intelligence (GPAI) consists of 29 international partners (28 countries plus the European Union), bringing together experts from science, industry, government, civil society, and international organizations. GPAI is committed to values that center on human-centric AI development, such as human rights, diversity and inclusion, innovation, and economic progress, and it aims to align with the United Nations' 17 Sustainable Development Goals (SDGs). Building on these shared values, GPAI invests in cutting-edge research and applications of AI to bridge the gap between theory and practice.

In its initial stage, GPAI organizes its collaboration with experts around four working groups: (1) Responsible AI, (2) Data Governance, (3) Future of Work, and (4) Innovation & Commercialization.
 

The Quadrilateral Security Dialogue (Quad)

(Australia and India are both Quad and New Southbound Policy countries)*

The Quad is a cooperative framework involving the US, Japan, India, and Australia. The "Quad spirit" aims to promote unity and prosperity in the Indo-Pacific region and establish a foundation based on democratic universal values of freedom, openness, and inclusivity. Serving as a bridge for the US in the Indo-Pacific region, it fosters cooperation in technology and trade while addressing security challenges. Artificial intelligence is one of the Quad’s key focus areas. The Quad aims to promote and expand cooperation in science and technology by utilizing each member country’s respective strengths and by bringing together industry, government, and academia.

Over the past five years, Quad partners have each released national artificial intelligence strategies. The agenda of the September 2021 Quad Leaders' Summit highlighted the establishment of "Technical Standards Contact Groups" focused on standard-setting and standards research in advanced ICT and artificial intelligence.

In line with the "Quad spirit," the Quad emphasized in a joint statement that AI "technology should not be misused or abused for malicious activities such as authoritarian surveillance and oppression." The Quad believes that responsible AI development is crucial and that technological collaboration with like-minded democratic countries is essential. Quad-led AI cooperation is committed to promoting an open and secure technology ecosystem, and it has helped counter the disruptive actions of authoritarian governments in the Indo-Pacific region, especially their malicious use of AI for surveillance, censorship, and the dissemination of misinformation.

The following article will provide an overview of AI development and individual AI policies in specific New Southbound partner countries.
 

 
