1. Overview of AI in Vietnam
1.1. Government policies supporting artificial intelligence

On December 5, 2024, the Government of Vietnam and Nvidia signed an agreement to establish an AI Research and Development Center (VRDC) and an AI Data Center in Vietnam to promote the application of artificial intelligence. The two centers are seen as a foundation for Nvidia and domestic partners to deploy advanced intelligent systems, opening up the opportunity to build a billion-dollar AI technology industry in the coming years.
- National strategy on AI:
Vietnam has issued the National Strategy on research, development and application of artificial intelligence through 2030, which aims to make AI a key technology field of the Fourth Industrial Revolution and thereby create major advances in production capacity, enhance national competitiveness and promote sustainable economic development.
The strategy sets the goal of placing Vietnam among the five leading ASEAN countries and the 60 leading countries in the world in AI research, development and application. It also aims to build five reputable AI brands in the region and to establish a national center for big data storage and high-performance computing.
In addition, the strategy calls for focusing research and development on AI products and services in which Vietnam has strong competitive potential, prioritizing investment in AI applications in key areas such as national defense and security, resource and environmental management, and community services, and strongly promoting AI-application businesses and AI startups.
The strategy also sets out a vision that by 2030, AI will be widely applied in the digital economy, digital government and digital society, and in making socio-economic activities more intelligent. Alongside this, it aims to build a workforce of leaders and workers with the mindset and skills to use AI to solve practical problems.
- Financial support and business incentive policies:
To promote AI research, development and application in Vietnam, the Government has issued many policies offering financial support and incentives to businesses. In particular, the National Innovation Fund was established to support and promote research, development and technology application, and to foster domestic innovation through grants, investment in research projects and resources for startups and businesses.
In addition, the Make in Vietnam program is an important initiative encouraging Vietnamese businesses to research, create, develop and manufacture technology products domestically. Make in Vietnam encourages businesses to master technology, from research and design through to production and commercialization, instead of merely participating in assembly and processing as before.

The aim is to create Vietnamese technology brands of regional and global stature that contribute actively to digital transformation across the economy, society and government, while reducing dependence on foreign technology and promoting the sustainable development of the digital technology industry.
1.2. Applying AI in Vietnamese businesses
- Artificial intelligence in manufacturing and supply chains
Many large corporations and manufacturers in Vietnam, such as VinFast and Viettel, have applied AI in industry to optimize production processes and forecast demand. VinFast's smart factories use production systems that integrate sensors and network connectivity to collect and analyze data effectively. This creates a synchronized link between devices and the entire production line, allowing the business to monitor the whole supply chain, from design and production until the product reaches the customer.
Meanwhile, Viettel Post is a leading unit in researching, developing and mastering smart logistics technology in Vietnam, with advanced solutions such as automated robot systems (automatic sorting robots (AGV Sorting), autonomous transport robots (AGV Picking) and automatic sorting robots (ARM)) and smart warehouse and transportation management. These solutions inspect, manage, sort and transport goods automatically, with fast and accurate processing and without human intervention.
- AI in financial and banking services
In finance and banking, Vietcombank and VPBank are typical examples of banks using AI to analyze customer data, detect financial fraud and improve the customer experience. Vietcombank uses the virtual assistant VCB Digibot for customer care: VCB Digibot provides instant, 24/7 and highly accurate answers to frequently asked questions about cards, interest rates, loans, promotions, exchange rates and more.
VPBank applies artificial intelligence in personal credit, foreign-currency trading and digital banking through the VPBank NEO app and VPBank NEO Express bank kiosks, while chatbots support customers and help reduce waiting times. VPBank has also developed the VPDirect system, which applies AI to monitor transactions and detect fraud and risks, improving the security of the banking system.
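As a rough, hypothetical illustration of how this kind of transaction monitoring can work, the sketch below flags unusual transactions with an unsupervised anomaly detector (scikit-learn's IsolationForest). The feature names, simulated data and thresholds are assumptions for illustration only, not a description of VPBank's actual VPDirect system.

```python
# Hypothetical sketch: flagging unusual transactions with an unsupervised
# anomaly detector. Feature names and simulated data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated transaction features: [amount_vnd, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.lognormal(mean=13, sigma=0.5, size=1000),   # typical amounts
    rng.integers(7, 22, size=1000),                  # daytime hours
    rng.uniform(0.0, 0.3, size=1000),                # low-risk merchants
])
suspicious = np.array([[5e8, 3, 0.9], [9e8, 2, 0.8]])  # large, late-night, risky

# Train on historical (mostly legitimate) transactions
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score a batch of new transactions: -1 means "anomalous", 1 means "normal"
new_transactions = np.vstack([normal[:5], suspicious])
labels = model.predict(new_transactions)
for features, label in zip(new_transactions, labels):
    status = "FLAG FOR REVIEW" if label == -1 else "ok"
    print(f"amount={features[0]:>14,.0f} VND  hour={int(features[1]):>2}  {status}")
```

In a real system, flagged transactions would feed a manual review queue rather than being blocked automatically.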

- AI in medicine and health care
Large hospitals such as Cho Ray Hospital, Hue Central Hospital and Bach Mai Hospital have used AI in medicine to analyze medical images and support diagnosis, helping doctors detect early signs of cardiovascular disease, cancer and neurological diseases by analyzing data from X-ray, MRI and CT images.
In addition, intelligent systems are applied to support remote patient treatment, manage electronic medical records and optimize patient-care processes. AI can forecast demand for hospital beds, coordinate examination and treatment schedules and manage medications, helping hospitals operate more efficiently.
1.3. AI startups in Vietnam
1.3.1 FPT.AI
FPT.AI is FPT Group's technology platform focused on artificial intelligence and automation solutions. The platform's key offerings include:
- FPT AI Chat
- FPT AI Engage – platform for building and managing automatic conversations and applying natural language processing technology
- FPT AI Enhance – solution to support call center quality improvement.
- FPT AI Read – tool to extract data from text and digitize documents
- FPT AI eKYC – electronic customer identification tool, applying the most advanced Computer Vision technologies including Optical Character Recognition (OCR), Liveness Detection, and Face Recognition.
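To give a sense of one building block behind an eKYC flow (matching the face on an identity document against a selfie), the sketch below uses the open-source face_recognition library. It is an assumed, simplified illustration rather than FPT AI eKYC's actual implementation; the file names are placeholders, and a real pipeline would also include OCR and liveness detection.

```python
# Simplified, hypothetical eKYC step: check that the face on an ID photo
# matches a selfie. Not FPT's implementation; file names are placeholders.
import face_recognition

# Load the two images (paths are illustrative)
id_image = face_recognition.load_image_file("id_card_photo.jpg")
selfie_image = face_recognition.load_image_file("selfie.jpg")

# Compute a 128-dimensional embedding for the first face found in each image
id_encodings = face_recognition.face_encodings(id_image)
selfie_encodings = face_recognition.face_encodings(selfie_image)

if not id_encodings or not selfie_encodings:
    print("No face detected in one of the images; ask the user to retake the photo.")
else:
    # Compare embeddings; tolerance is the maximum allowed distance (lower = stricter)
    match = face_recognition.compare_faces(
        [id_encodings[0]], selfie_encodings[0], tolerance=0.5
    )[0]
    distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
    print(f"Face match: {match} (distance = {distance:.3f})")
```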
To date, FPT.AI has deployed more than 3,125 chatbots and 3,200 virtual assistants, serving more than 200 million interactions between businesses and customers each month and helping businesses reduce costs by up to 40% and increase productivity by up to 67%.
1.3.2 Viettel AI
Viettel AI is a unit focused on researching, developing and applying artificial intelligence technologies in technology products and services. It concentrates on areas such as Vietnamese natural language processing, big data and computer vision.
Some of Viettel AI's AI-powered solutions include:
- Cyberbot: automatic customer care support with Chatbot and Callbot.
- REPUTA: provides online data monitoring and analysis solutions, supporting businesses and organizations in effectively managing brand image and interacting with customers.
- Voice Note: a smart Vietnamese voice note-taking application that helps users (journalists, students, content creators, etc.) easily convert audio content into text.

1.3.3 VinAI
VinAI, a subsidiary of Vingroup, focuses on researching and developing artificial intelligence applications that solve real-world problems and bring practical value to everyday life. VinAI's products are widely applied in many fields, from transportation and security to health care.
VinAI’s main product lines:
- Smart Mobility: product line for smart cars including driver monitoring system (DMS), 360-degree panoramic view system, MirrorSense – automatic rearview mirror adjustment technology
- Smart Data: solution for developing artificial intelligence technology.
- Smart Edge: products for image data analytics and natural language processing solutions.
1.3.4 Vbee
Vbee is a leading Vietnamese technology company specializing in artificial intelligence solutions for Vietnamese language and speech processing. With the goal of creating intelligent voice technology and optimizing the user experience, Vbee has developed many products and services for businesses and end users.
Outstanding Vbee products:
- Vbee AIVoice: Vbee's flagship product line of natural, human-like artificial voices. Notable products include voice cloning (Vbee Voice Cloning), AI dubbing (Vbee AI Dubbing) and the Vbee AIVoice API. Vbee's text-to-speech supports more than 50 languages, with a range of genders and regional accents; users can customize voices in many styles and apply them in fields such as content production, e-learning, audiobooks and chatbots.
- Vbee AICall: a virtual assistant that replaces human operators, answering questions about products and services, conducting customer surveys, announcing promotions, and taking care of customers on birthdays or major holidays.

1.3.5 Zalo AI
Zalo AI is a collection of artificial intelligence applications developed by VNG Corporation to bring more convenient and smarter experiences to Zalo users. Zalo AI uses computer vision, speech processing, natural language processing and data mining to develop products such as:
- Zalo AI Avatar: create a personal avatar with just a selfie.
- Zalo AI Chat: chat with a smart chatbot that supports answering questions, ordering and shopping.
- Zalo AI Translate: translate multilingual text and voice quickly and accurately.
- Zalo AI Filter: edit photos and videos with unique AI filters.
- Zalo AI Music: search and recommend music according to personal preferences based on the user’s music listening habits.
- Zalo AI Game: play entertaining mini games right on Zalo with AI integration.
2. Ethical issues when using and developing AI
2.1 Privacy and data security
For an AI platform to operate effectively, processing and analyzing large amounts of data (big data) is a key factor. Big data provides a rich source of information that helps AI learn, predict and make more accurate decisions. However, this poses a major challenge for user privacy and data security; balancing data exploitation with privacy protection will be key to sustainable AI development.
2.2. Privacy issues in AI
- Collection of user data
Intelligent systems such as chatbots, virtual assistants and mobile applications often collect user data in order to provide personalized services, products or content suited to each person. However, excessive data collection can violate user privacy, and data collected without user consent, or data that is misused, may violate both the law and ethical norms.

- Risk of personal data leakage
Protecting data from unauthorized access and cyberattacks is critical. In practice, once data has been collected there is always a risk that users' personal data will be leaked, with potentially serious consequences. When personal information is exposed, bad actors can exploit it to commit illegal acts that affect users' lives and property.
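One basic safeguard against this kind of leak is encrypting personal data at rest, so that a stolen database dump is unreadable without the key. The sketch below is a minimal illustration using the Python cryptography library's Fernet symmetric encryption; the identifier is made up, and in practice the key would live in a dedicated key-management service rather than alongside the data (an assumption noted in the comments).

```python
# Minimal sketch: encrypting a piece of personal data before storage so that
# a leaked database record is useless without the separately stored key.
from cryptography.fernet import Fernet

# In production, generate the key once and keep it in a key-management service
# (assumption), never in the same database as the encrypted records.
key = Fernet.generate_key()
fernet = Fernet(key)

national_id = "012345678901"  # illustrative personal identifier

# Store only the ciphertext
ciphertext = fernet.encrypt(national_id.encode("utf-8"))
print("stored value:", ciphertext[:40].decode() + "...")

# Decrypt only when an authorized service actually needs the plaintext
plaintext = fernet.decrypt(ciphertext).decode("utf-8")
assert plaintext == national_id
```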
2.3 Safety and security
As artificial intelligence is integrated into many areas of life, the potential for accidents or misuse increases; this is one of the leading challenges of using AI today. AI models can make wrong decisions because of incomplete or biased training data, or because of errors in algorithm design and training.
Although modern and intelligent, AI systems still have security weaknesses. They can be vulnerable to cyberattacks that exploit algorithmic vulnerabilities, leading to data breaches and security threats, for example the theft of personal or sensitive information or the creation of sophisticated phishing attacks that are difficult to detect.

2.4 Labor and employment issues
Intelligent systems can automate certain tasks, such as assembly, quality inspection, packaging and customer service, performing them faster and more accurately than humans and thereby increasing productivity and production efficiency. However, a consequence of this process is that many jobs will disappear or change significantly, leading to unemployment and a need for new labor skills.
2.5 Social impact
- The relationship between humans and machines
The development of AI can change the way we interact with each other and with the world around us. AI not only takes over some repetitive tasks but also changes how we communicate, from virtual assistants to intelligent chatbots. This technology can enhance social connection, but it also risks reducing face-to-face human interaction.

- Human values
The rise of automation can threaten core human values such as creativity and compassion. AI, with its ability to automate decisions and actions, lacks the understanding that only humans can provide in situations where empathy is needed.
In addition, as AI becomes increasingly capable of producing creative works such as art, music and literature, humans risk gradually ceding their natural creative role. If not properly regulated, AI could erode human values, replace human emotion and intelligence with mechanical solutions, and reduce the richness and diversity of society.
2.6 Fairness and transparency
Artificial intelligence should be designed to serve everyone, regardless of background or circumstances. Ensuring that models do not discriminate on the basis of factors such as gender, race, religion or geography is extremely important. Fair AI helps create a fairer society in which everyone has the opportunity to thrive and succeed.
Besides fairness, the requirement for transparency in AI decisions is becoming increasingly urgent, especially as AI is applied more widely in important fields such as healthcare, finance and justice. When users understand why a system makes the decisions it does, they trust it more and are more willing to use it; transparency also helps developers detect and fix errors in the algorithm.
To make AI decisions reliable, several measures can be taken: developing explainable AI models (Explainable AI – XAI), storing information about the AI decision-making process so that it can be retrieved and re-checked when needed, and performing regular audits to ensure that AI systems operate correctly and without errors.
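As a small, assumed illustration of the explainability idea, the sketch below uses scikit-learn's permutation importance to show which input features a trained model relies on most. This is just one simple, model-agnostic technique; dedicated XAI methods such as SHAP or LIME go further by explaining individual predictions.

```python
# Minimal explainability sketch: which features does a trained model rely on?
# Uses permutation importance as a simple, model-agnostic explanation technique.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} importance={result.importances_mean[idx]:.3f}")
```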
3. Bias in algorithms
3.1. The origins of bias in AI
- The data is not diverse
If the input data comes primarily from a particular group of people (for example, only Americans, only white people, or only highly educated people), the model learns the characteristics and patterns of that data. As a result, the AI understands and serves only those groups, ignoring the needs and characteristics of other groups such as women, ethnic minorities or underrepresented communities.

- The algorithm is not fair
If an AI algorithm is poorly designed, the model can unintentionally introduce bias, leading to decisions that disadvantage certain groups of people. When the data used to train an AI model contains social biases, the model learns and reproduces those biases. For example, if a hiring algorithm is trained primarily on data about male candidates, it may favor male candidates.
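One common way to detect this kind of bias in practice is to compare the model's selection rates across groups. The sketch below computes a disparate impact ratio on hypothetical hiring predictions; the data, column names and the 0.8 rule-of-thumb threshold are illustrative assumptions, not any specific company's audit procedure.

```python
# Illustrative bias check for a hiring model: compare selection rates by gender.
import pandas as pd

# Hypothetical model outputs: 1 = recommended for interview, 0 = rejected
predictions = pd.DataFrame({
    "gender": ["male"] * 80 + ["female"] * 20,
    "recommended": [1] * 48 + [0] * 32 + [1] * 6 + [0] * 14,
})

# Selection rate per group
rates = predictions.groupby("gender")["recommended"].mean()
print(rates)  # female: 0.30, male: 0.60

# Disparate impact ratio: disadvantaged-group rate / advantaged-group rate.
# A common rule of thumb flags ratios below 0.8 as potentially discriminatory.
ratio = rates["female"] / rates["male"]
print(f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Warning: the model may be biased against female candidates.")
```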
3.2. How to overcome bias in AI
- Ensure diversity of training data
Ensuring diversity in training data is an important requirement in building fair and effective AI models. Using data from a variety of sources increases representativeness, minimizes bias, and ensures that the model performs well across multiple populations. This is especially important in applications such as healthcare, recruitment or legal systems.
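Before training, representativeness can be checked with a simple report comparing group shares in the dataset against a reference population, as in the hypothetical sketch below (the group labels, sample counts and reference shares are invented for illustration).

```python
# Quick representativeness check on a training set before model training.
import pandas as pd

training_data = pd.DataFrame({
    "region": ["urban"] * 700 + ["rural"] * 250 + ["mountainous"] * 50,
})

# Share of each group in the training data vs. an assumed reference population
observed = training_data["region"].value_counts(normalize=True)
expected = pd.Series({"urban": 0.40, "rural": 0.45, "mountainous": 0.15})

report = pd.DataFrame({"observed": observed, "expected": expected})
report["gap"] = report["observed"] - report["expected"]
print(report.sort_values("gap"))

# Groups with large negative gaps are underrepresented; collect more data for
# them (or reweight/resample) before training.
```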
- AI auditing and monitoring
AI auditing and monitoring play an important role in ensuring intelligent systems make fair and transparent decisions. Organizations need to perform periodic audits to detect and promptly correct problems such as bias or unfairness in the decision-making process.
Applying audit tools and methods not only helps prevent decisions that harm specific individuals or groups, but also ensures system compliance with ethical and legal regulations. This not only strengthens user trust but also contributes to the sustainable development of AI.
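Part of such an audit can even be automated: periodically recompute a fairness metric on recent decisions and raise an alert when it crosses an agreed threshold. The sketch below is an assumed, minimal illustration; the metric, threshold, group labels and logging setup are not prescribed by any particular standard.

```python
# Sketch of an automated fairness audit: recompute a metric on recent
# decisions and alert when it crosses a threshold. Values are illustrative.
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")


def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.min() / rates.max())


def run_periodic_audit(recent_decisions: pd.DataFrame, threshold: float = 0.8) -> None:
    ratio = disparate_impact(recent_decisions, "group", "approved")
    log.info("disparate impact ratio over the audit window: %.2f", ratio)
    if ratio < threshold:
        # In a real system this would open a ticket or notify an ethics board.
        log.warning("Fairness threshold breached (%.2f < %.2f); review the model.",
                    ratio, threshold)


# Example run on a hypothetical month of loan decisions
decisions = pd.DataFrame({
    "group": ["A"] * 50 + ["B"] * 50,
    "approved": [1] * 35 + [0] * 15 + [1] * 20 + [0] * 30,
})
run_periodic_audit(decisions)
```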

4. Legal responsibilities when using AI
4.1. Responsibilities of businesses developing AI
- Ensure fairness and transparency
Companies developing AI need to be responsible for ensuring the accuracy and fairness of their models. This requires investing in careful model design, training and testing, while minimizing the risks associated with bias and unfairness. Additionally, companies need to be transparent about how AI works, provide clear explanations to users, and comply with ethical standards and legal regulations.
- Ensure legal compliance
Businesses need to strictly comply with regulations on privacy, cybersecurity and personal data protection. This includes collecting, storing and processing personal data in a transparent manner, using data only for authorized purposes and applying advanced security measures to prevent data leaks or compromises.
In addition, businesses need to comply with current laws, such as GDPR (General Data Protection Regulation), PCI DSS (for payment businesses), HIPAA (for medical businesses), etc.
- Continuous monitoring
Companies must establish monitoring and auditing systems that can evaluate and ensure ethical compliance of AI systems. The monitoring process should include evaluating the technical performance of algorithms, transparency, and the social impact of artificial intelligence to detect potential problems early.

4.2. Responsibilities of AI users
AI users need to be conscious and responsible when using the technology: complying with ethical and legal principles, respecting privacy, protecting personal data, and avoiding violations such as using other people's data without permission or exploiting AI for malicious purposes.
4.3. Responsibilities of government and regulatory agencies
- Enact laws and regulations
Governments need to have clear legal regulations on data protection, privacy and liability related to the development and deployment of AI. These regulations include standards for data collection, storage and processing, ensuring that personal data is protected and not misused. At the same time, it is necessary to clearly define the legal responsibilities of organizations or individuals in case AI causes negative impacts.
- AI monitoring and control
The government needs to closely monitor technology companies and ensure that they fully comply with privacy and security regulations. When breaches occur, companies must be held liable and face strict penalties in order to deter abuse or misuse of data. At the same time, the government should establish specialized agencies and effective monitoring tools to carry out periodic checks, detect violations early and protect users' rights.

5. Some frequently asked questions about ethics and responsibility when using AI
5.1 How can AI violate user privacy?
AI can violate privacy in many ways:
- Excessive data collection: AI systems such as chatbots and virtual assistants often collect user data to personalize services, but excessive collection can violate privacy rights.
- Data collection without consent: Data collected without user consent or misused will violate the law and ethics.
- Risk of data leakage: After being collected, there is always a risk of leakage of personal data, which can be used by bad actors to commit illegal acts, affecting the life and property of users.
To protect privacy, organizations need to comply with regulations such as GDPR, PCI DSS, HIPAA and adopt advanced security measures.
5.2 Where do biases in AI algorithms come from, and how can they be overcome?
Bias in AI has two main sources, along with corresponding remedies:
- Sources:
- Data is not diverse: If the input data is primarily from a specific group (e.g., only white people, men), the AI will only understand and serve this group, ignoring other groups.
- Unfair Algorithms: Poorly designed algorithms can create bias, leading to decisions that disadvantage certain groups of people.
- How to fix:
- Ensure data diversity: Use data from many different sources to increase representativeness and minimize bias.
- AI audit and monitoring: Perform periodic audits to promptly detect and correct issues of bias or unfairness in the decision-making process.
5.3 Can AI completely replace humans at work?
AI cannot completely replace humans, but will create big changes:
- Positive impact:
- AI can automate tasks such as assembly, quality testing, and customer service faster and more accurately than humans.
- Helps increase productivity and production efficiency.
- Challenge:
- Many jobs will disappear or change significantly, leading to unemployment.
- New labor skills are required.
- Values that cannot be replaced:
- AI lacks the understanding and empathy that only humans have.
- Core values such as creativity and compassion are still human strengths.
- Humans are still needed in situations that require empathy and complex decisions.
5.4 Who is liable when AI causes damage?
Liability when using AI is divided into three main groups:
- Enterprises developing AI:
- Ensure fairness and transparency of AI models.
- Comply with regulations on privacy, cybersecurity and personal data protection.
- Continuous monitoring and auditing to evaluate AI performance.
- AI users:
- Be conscious and responsible when using AI technology.
- Comply with ethical and legal principles.
- Respect privacy and avoid using AI for malicious purposes.
- Government and regulatory agencies:
- Enact clear laws and regulations on legal responsibilities.
- Monitor and control technology companies.
- Establish specialized agencies to inspect and handle violations.
5.5 How to ensure AI operates transparently and fairly?
To ensure AI operates transparently and fairly, the following measures should be taken:
- Ensuring transparency:
- Develop an explanatory AI model (Explainable AI – XAI) so that users understand how AI makes decisions.
- Store information relevant to the decision-making process so it can be retrieved and audited.
- Provide clear explanations to users about how AI works.
- Ensuring fairness:
- AI must be designed to serve everyone, without discrimination based on gender, race, religion or geography.
- Use diverse and representative data to train the model.
- Perform regular testing to ensure the system operates properly and without errors.
- Monitoring and auditing:
- Perform periodic audits to detect and correct problems.
- Establish a continuous monitoring system to evaluate performance and social impact.
- Comply with ethical and legal standards throughout the development and deployment of AI.
Ethics and responsibility in the use of AI are key to ensuring that this technology develops sustainably and benefits society. By adhering to ethical and responsible principles, we can harness AI's potential to the fullest while minimizing its possible negative impacts.