The path and lessons of EU artificial intelligence ethics and governance

Authors: Cao Jianfeng, Senior Researcher, Tencent Research Institute

Fang Lingman, Assistant Researcher, Legal Research Center, Tencent Research Institute

The EU is actively promoting human-centered AI ethics and governance

The development and penetration of digital technology have accelerated the transformation of society and the economy, and artificial intelligence (AI), as a core driving force, has created new possibilities for social development. Broadly speaking, AI refers to algorithms or machines that learn, decide, and act autonomously on the basis of input information; its progress rests on growing computer processing power, improved algorithms, and the exponential growth of data. From machine translation to image recognition to the synthesis of works of art, applications of AI have entered daily life. Today, AI is widely used across industries (education, finance, construction, transportation, etc.) and services (autonomous driving, AI-assisted medical diagnosis, etc.), and it is profoundly changing human society. At the same time, AI poses legal, ethical, and social challenges, bringing problems such as fake news, algorithmic bias, privacy infringement, data protection, and cybersecurity. Against this background, AI ethics and governance have received increasing attention, and governments, industry, and academia worldwide have begun exploring and formulating AI ethical guidelines. The European Union has been actively exploring AI ethics and governance since 2015; although it does not lead in the development of AI technology itself, it has been at the forefront of AI governance.

As early as January 2015, the Legal Affairs Committee of the European Parliament (JURI) decided to set up a working group to study legal issues related to the development of robots and artificial intelligence. In May 2016, JURI released the Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics, calling on the European Commission to assess the impact of artificial intelligence, and in January 2017 formally put forward extensive recommendations on civil legislation for robots, proposing the formulation of a "Robotics Charter". [1] In May 2017, the European Economic and Social Committee (EESC) issued an opinion on AI, pointing out the opportunities and challenges that AI brings to 11 areas such as ethics, security, and privacy, and advocating the formulation of AI ethical norms and the establishment of a standard system for AI monitoring and certification. [2] In October of the same year, the European Council pointed out that the EU should have a sense of urgency in responding to new trends in artificial intelligence, ensuring a high level of data protection, digital rights and the formulation of relevant ethical standards, and invited the European Commission to propose methods to respond to new trends in artificial intelligence in early 2018. [3] In order to address the ethical issues raised by the development and application of artificial intelligence, the EU has established AI ethics and governance as a key focus of future legislative work.

On April 25, 2018, the European Commission released the policy document Artificial Intelligence for Europe, marking the formal launch of the EU's artificial intelligence strategy. The strategy proposes a people-oriented path for AI development, aiming to enhance the EU's research capacity and industrial capabilities, respond to the technical, ethical, and legal challenges posed by AI and robotics, and put AI at the service of Europe's social and economic development. The strategy rests on three pillars: first, enhance technical and industrial capacity and promote the penetration of AI throughout the economy; second, actively respond to socio-economic change by keeping education and training systems up to date, closely monitoring labor-market changes, supporting workers in transition, and cultivating diverse, interdisciplinary talent; third, establish an appropriate ethical and legal framework, clarify the applicability of product rules, and draft AI ethics guidelines. [4] In June of the same year, the European Commission appointed 52 representatives from academia, industry, and civil society to form the High-Level Expert Group on Artificial Intelligence (AI HLEG) to support the implementation of the strategy.

In January 2019, the Industry, Research and Energy Committee of the European Parliament issued a report calling on the European Parliament to formulate a comprehensive EU industrial policy for artificial intelligence and robotics, which covers cybersecurity, the legal framework, ethics, and governance of artificial intelligence and robotics. [5] In April 2019, the EU successively issued two important documents: the Ethics Guidelines for Trustworthy AI (the “Ethics Guidelines”) [6] and the Governance Framework for Algorithmic Accountability and Transparency (the “Governance Framework”) [7]. These documents are the specific implementation of the EU’s AI strategy’s requirement to “establish an appropriate ethical and legal framework”, providing a reference for the formulation of relevant rules in the future, and represent the EU’s latest efforts to promote AI governance.

Building an AI ethics framework: the ethics guidelines for trustworthy AI

To balance technological innovation and the protection of human rights, the construction of an AI ethics framework is essential. Such a framework provides principled guidance and baseline requirements for the design, development, production, and use of AI, ensuring that it operates in accordance with legal, safety, and ethical standards. The Ethics Guidelines were drafted and issued by AI HLEG and are not legally binding. The EU encourages all stakeholders to implement them actively and to promote an international consensus on AI ethics standards. Beyond pan-EU guidelines, the EU hopes that the ethical governance of AI will be safeguarded at multiple levels: member states can establish bodies to monitor and supervise AI ethics, and companies can be encouraged to set up ethics committees and formulate ethical guidelines that guide and constrain their AI developers and their R&D and application activities. The ethical governance of AI thus cannot remain at the level of abstract principle; it must be integrated into the practical activities of different actors at different levels to become a living mechanism.

According to the Ethics Guidelines, trustworthy AI has three characteristics (the list is not exhaustive): (1) lawfulness: trustworthy AI should respect fundamental rights and comply with existing law; (2) ethical soundness: trustworthy AI should adhere to ethical principles and values and serve an "ethical purpose"; (3) robustness: from both a technical and a social perspective, AI systems should be robust and reliable, because even a system that meets an ethical purpose can inadvertently cause harm if it lacks reliable technical underpinnings. Specifically, the ethical framework for trustworthy AI comprises the following three levels:

1. The foundation of trustworthy AI

Among the fundamental rights enshrined in international human rights law, the EU Charter, and relevant treaties, the main requirements bearing on the development of AI include human dignity, individual freedom, democracy, justice and the rule of law, equality, non-discrimination and solidarity, and citizens' legitimate rights. Many public and private organizations draw on fundamental rights to develop ethical frameworks for AI systems; for example, the European Group on Ethics in Science and New Technologies (EGE) proposed nine basic principles based on the values in the EU Charter and related regulations. Drawing on most of these existing principles, the Ethics Guidelines distill four ethical principles suited to the requirements of social development and treat them as the foundation of trustworthy AI, guiding its development, deployment, and use.

These principles are: (1) Respect for human autonomy. Humans interacting with AI must retain full and effective self-determination. AI systems should follow a people-oriented philosophy: serving humans, augmenting human cognition, and improving human skills. (2) Prevention of harm. AI systems must not negatively affect humans; the systems and their operating environments must be safe, and the technology must be robust and protected against malicious use. (3) Fairness. The development, deployment, and use of AI systems must uphold both substantive and procedural fairness, ensuring that benefits and costs are distributed equitably and that individuals and groups are free from discrimination and prejudice. In addition, individuals affected by decisions made by AI and by its operators must be able to raise objections and seek redress. (4) Explicability. The functions and purposes of AI systems must be open and transparent, and the AI decision-making process must be explained, to the extent possible, to those directly or indirectly affected by its results.

2. Realization of Trustworthy AI

Under the guidance of these ethical principles, the Ethics Guidelines propose seven key requirements that the development, deployment, and use of AI systems should meet. The four ethical principles serve as top-level values that guide the research, development, and application of trustworthy AI, while the seven key requirements are ethical requirements that can be operationalized. AI ethics is thus a governance process running from macro-level values, through meso-level ethical requirements, down to micro-level technical implementation.

1. Human agency and oversight

First, AI should help humans exercise their fundamental rights. Where technical limitations make it likely that AI will infringe on fundamental rights, a fundamental-rights impact assessment should be completed before the system is developed, and an external feedback mechanism should be established to track the system's possible impact on those rights. Second, AI should support individuals in making better-informed decisions in line with their own goals; individual autonomy should not be undermined by automated AI decision-making. Finally, appropriate oversight mechanisms should be established, such as "human-in-the-loop" (human intervention in every decision cycle of the system), "human-on-the-loop" (human intervention in the system's design and monitoring cycle), and "human-in-command" (human oversight of the overall activity and impact of the AI and the power to decide whether to use it).
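As a rough illustration, the three oversight modes can be thought of as different gates on an automated decision. The sketch below is hypothetical: the mode names map to the Guidelines' terms, but the gating logic and the `decide`/`human_approves` interface are invented for illustration.

```python
from enum import Enum, auto

class OversightMode(Enum):
    """Illustrative labels for the three oversight modes named in the Ethics Guidelines."""
    HUMAN_IN_THE_LOOP = auto()   # a human confirms every individual decision
    HUMAN_ON_THE_LOOP = auto()   # a human monitors the design/operation cycle and can intervene
    HUMAN_IN_COMMAND = auto()    # a human oversees overall activity and may halt the system

def decide(score: float, mode: OversightMode, human_approves) -> str:
    """Gate an automated decision according to the chosen oversight mode.
    `human_approves` is a callable standing in for a real human review step."""
    automated = "approve" if score >= 0.5 else "reject"
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        # every single decision requires explicit human confirmation
        return automated if human_approves(automated) else "escalate"
    # in the other modes the automated result stands, subject to monitoring
    return automated

# Example: a human reviewer who blocks automated approvals near the threshold
result = decide(0.51, OversightMode.HUMAN_IN_THE_LOOP, lambda d: False)
```

In this toy framing, only human-in-the-loop changes the per-decision control flow; the other two modes live outside the decision path (monitoring and the power to switch the system off), which is why they do not appear in the gate itself.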

2. Technical robustness and security

On the one hand, AI systems must be accurate, reliable, and reproducible: the accuracy of their decisions should be improved, evaluation mechanisms refined, and the unexpected risks of erroneous predictions reduced in a timely manner. On the other hand, AI systems must be strictly protected against vulnerabilities and malicious attack; security measures should be developed and tested to minimize unintended consequences and errors, and executable fallback plans should be in place for when the system fails.
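A minimal sketch of an "executable fallback plan": wrap the model so that low-confidence or failing predictions fall back to a safe default. The `(label, confidence)` interface, the threshold, and the fallback value are assumptions for illustration, not a prescribed design.

```python
def robust_predict(model, x, threshold=0.8, fallback=lambda x: "defer_to_human"):
    """Return the model's prediction only when its confidence clears a threshold;
    otherwise invoke the backup plan. `model` is any callable returning
    (label, confidence) - an assumed interface."""
    try:
        label, confidence = model(x)
    except Exception:
        # a failing system should degrade to the backup plan, not crash
        return fallback(x)
    if confidence < threshold:
        return fallback(x)
    return label

# A toy model that is confident on positive inputs only
toy = lambda x: ("positive", 0.95) if x > 0 else ("negative", 0.4)
```

The point of the wrapper is that the unsafe path (an exception or a shaky prediction) is handled by the same explicit, testable branch, rather than being left to whatever the surrounding code happens to do.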

3. Privacy and data governance

User privacy and data must be strictly protected throughout the life cycle of an AI system to ensure that collected information is not used unlawfully. Errors, inaccuracies, and biases in the data should be eliminated while data integrity is preserved, and the entire process of data handling should be recorded. Data-access agreements should be managed rigorously, with strict control over the conditions for data access and flow.
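Strict control of data-access conditions can be made concrete as a check of each request against recorded access agreements, with every request audited whether granted or denied. The schema below (requester, dataset, purpose) is purely illustrative.

```python
def authorize_access(request: dict, agreements: dict):
    """Check a data-access request against recorded data-access agreements
    and produce an audit line either way. Field names are illustrative."""
    key = (request["requester"], request["dataset"])
    allowed_purposes = agreements.get(key, set())
    granted = request["purpose"] in allowed_purposes
    audit_line = "{} {} access to {} for {}".format(
        "GRANT" if granted else "DENY",
        request["requester"], request["dataset"], request["purpose"])
    return granted, audit_line

# An agreement that permits one purpose only; any other purpose is denied
agreements = {("model_team", "claims_2023"): {"fraud_detection"}}
ok, line = authorize_access(
    {"requester": "model_team", "dataset": "claims_2023",
     "purpose": "marketing"}, agreements)
```

Binding access to a declared purpose, rather than to the requester alone, is what lets the audit trail show not just who touched the data but why, which is the part a regulator or DPIA reviewer typically asks for.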

4. Transparency

The data sets, processes, and results of AI decision-making should be traceable, and decision results should be understandable to and traceable by humans. When an AI system's decisions significantly affect individuals, its decision-making process should be explained appropriately and promptly. Users' overall understanding of the AI system should be improved: they should understand their interactions with the system and be truthfully informed of its accuracy and limitations.
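Traceability is often implemented as an append-style decision log. A minimal sketch follows; the field names and the tamper-evidence digest are illustrative choices, not a schema from the Guidelines.

```python
import datetime
import hashlib
import json

def log_decision(inputs: dict, model_version: str, output, reason: str) -> dict:
    """Record one automated decision so it can later be traced and explained."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which model produced the result
        "inputs": inputs,                 # the data the decision was based on
        "output": output,
        "reason": reason,                 # human-readable explanation of the result
    }
    # hashing inputs+output lets an auditor detect after-the-fact tampering
    payload = json.dumps({"inputs": inputs, "output": output}, sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return record

entry = log_decision({"income": 30000}, "credit-v1.2", "reject",
                     "income below the configured minimum")
```

Recording the model version alongside the inputs matters because the same inputs can yield different outputs after a retrain; without it, a logged decision cannot be reproduced or explained later.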

5. Diversity, non-discrimination and fairness

To prevent AI systems from producing bias and discrimination against vulnerable and marginalized groups, they should be user-centered and designed so that anyone can use AI products or receive their services. Universal design principles and relevant accessibility standards should be followed to meet the needs of the widest possible range of users. At the same time, diversity should be promoted, and relevant stakeholders should be able to participate throughout the AI life cycle.
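One simple way to make the non-discrimination requirement measurable is to compare positive-outcome rates across groups (the "demographic parity" gap). This is only one of many fairness metrics, chosen here for illustration; the Guidelines do not prescribe a specific one.

```python
def demographic_parity_gap(outcomes: dict) -> float:
    """Difference between the highest and lowest positive-outcome rates
    across groups. `outcomes` maps group name -> list of 0/1 decisions."""
    rates = {group: sum(decisions) / len(decisions)
             for group, decisions in outcomes.items()}
    return max(rates.values()) - min(rates.values())

gap = demographic_parity_gap({
    "group_a": [1, 1, 0, 1],   # 75% positive outcomes
    "group_b": [1, 0, 0, 1],   # 50% positive outcomes
})
```

A gap of zero means all groups receive positive outcomes at the same rate; in practice a tolerance is set, and exceeding it triggers review rather than an automatic verdict of discrimination, since different base rates can have legitimate causes.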

6. Social and Environmental Well-Being

The use of AI systems to promote sustainable development, protect the environment, and study and address issues of global concern should be encouraged. Ideally, AI systems should benefit present and future generations alike; their development, deployment, and use should therefore take full account of their impact on the environment, society, and even democratic politics.

7. Accountability

First, an accountability mechanism should be established to identify responsible entities throughout the development, deployment, and use of AI systems. Second, an audit mechanism for AI systems should be established to evaluate algorithms, data, and design processes. Third, the potential negative impact of AI systems on individuals should be identified, recorded, and minimized, and appropriate remedial measures should be taken in a timely manner when AI systems produce unfair results.

It is worth noting that different principles and requirements may be in tension with one another, since they involve different interests and values. Decision makers therefore need to make trade-offs according to the circumstances, while continuously recording, evaluating, and communicating their choices. In addition, the Ethics Guidelines propose technical and non-technical methods to ensure that the development, deployment, and use of AI meet these requirements, such as developing explainable AI (XAI), training monitoring models, building a legal framework for AI supervision, establishing and improving industry norms, technical standards, and certification schemes, and educating the public to raise ethical awareness.
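As one example of an XAI technique, permutation importance estimates how much a model relies on a feature by shuffling that feature's values and measuring the resulting drop in accuracy. It is a standard model-agnostic method, used here purely as a sketch; the Guidelines do not mandate any particular technique.

```python
import random

def permutation_importance(predict, X, y, feature_idx, trials=30, seed=0):
    """Shuffle one feature column and measure the average accuracy drop.
    A large drop means the model relies heavily on that feature."""
    rng = random.Random(seed)
    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)
    base = accuracy(X)
    drops = []
    for _ in range(trials):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)  # break the feature's link to the labels
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# Toy model that only looks at feature 0; feature 1 is irrelevant
predict = lambda row: 1 if row[0] > 0 else 0
X = [[1, 9], [-1, 9], [2, 9], [-2, 9]]
y = [1, 0, 1, 0]
```

On this toy data, shuffling the irrelevant feature changes nothing, while shuffling the feature the model depends on degrades accuracy, which is exactly the signal an auditor would use to check whether a protected attribute is driving decisions.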

3. Evaluation of Trustworthy AI

Building on the seven key requirements, the Ethics Guidelines also provide an assessment list for trustworthy AI. The list applies mainly to AI systems that interact with humans and is intended to guide the concrete implementation of the seven requirements, helping different functions within a company or organization, such as management, legal, R&D, quality assurance, HR, procurement, and day-to-day operations, jointly ensure that trustworthy AI is realized. The Guidelines note that the items listed are not always exhaustive: building trustworthy AI requires continuously refining the requirements and seeking new solutions, and all stakeholders should participate actively to ensure that AI systems operate safely, robustly, legally, and ethically throughout their life cycle and ultimately benefit humankind.

It can be seen that in the EU's view, AI ethics is a systematic project that requires the coupling of ethical norms and technical solutions. The construction of AI ethics in other countries and the international community may still remain at the stage of abstract value extraction and consensus building, but the EU has gone a step further and begun to explore the construction of a top-down AI ethics governance framework.

Policy recommendations for AI algorithm governance: a governance framework for algorithmic accountability and transparency

The Governance Framework is a systematic research report on algorithmic transparency and accountability released by the European Parliament's Panel for the Future of Science and Technology (STOA). Drawing on a series of real-world cases, the report explains the causes and possible consequences of unfair algorithms, as well as the obstacles to achieving algorithmic fairness in specific contexts. On that basis, it proposes algorithmic transparency and accountability as tools for achieving algorithmic fairness, which it treats as the purpose of algorithmic governance, and it emphasizes the role of the "Responsible Research and Innovation" (RRI) approach in promoting that goal. The core of RRI is inclusive, responsible innovation with the participation of stakeholders.

Based on the analysis of the challenges that algorithmic systems bring to society, technology, and regulation, the report proposes systematic policy recommendations for the governance of algorithmic transparency and accountability. Starting from a high-level perspective of technology governance, the report discusses various types of governance options in detail, and finally reviews specific recommendations for algorithmic system governance in existing literature. Based on an extensive review and analysis of existing recommendations for algorithmic system governance, the report proposes policy recommendations at four different levels.

1. Improving the public’s algorithm literacy

The premise of algorithmic accountability is algorithmic transparency, but this does not mean the public must grasp every technical feature of an algorithm. The report notes that a broad understanding of how algorithms function contributes little to accountability; the disclosure of brief, standardized information that can affect public decision-making or enhance the public's overall understanding of an algorithmic system is more effective. Investigative journalism and whistleblowing also play an important role in exposing the improper use of algorithms and achieving transparency and accountability. For example, the New York Times reported that Uber used algorithmic tools to identify and evade city regulators, a story revealed by a former Uber employee. The report immediately drew widespread attention from the media and society, and regulators opened an investigation into the company. Beyond their supervisory role, news reports improve public understanding of algorithms through plain language, and investigations can stimulate broad social dialogue and debate and trigger new academic research. For example, a report by the non-profit organization ProPublica on "machine bias" in COMPAS, a criminal-risk-assessment system used by some US courts, triggered a series of studies on algorithmic fairness.

Based on this, the Governance Framework puts forward several policy recommendations for enhancing public awareness of algorithms: (1) Educate the public to understand the core concepts of algorithm selection and decision-making; (2) Standardize the mandatory disclosure content of algorithms; (3) Provide technical support for news reports on "algorithmic accountability"; (4) Allow whistleblowers to be exempted from liability if they violate the terms of service or infringe intellectual property rights for public interest reasons.

2. Establishing algorithmic accountability mechanisms in the public sector

Today, more and more public bodies are using algorithmic systems to improve efficiency, support complex processes, and assist policy-making. If an algorithm is flawed, its impact on vulnerable groups in society can be immeasurable, so the public sector especially needs sound mechanisms for algorithmic transparency and accountability. One option is to draw on the data protection impact assessment (DPIA) mechanism in data-protection law and establish an algorithmic impact assessment (AIA) mechanism, which lets policymakers understand where algorithmic systems are used, evaluate their intended use and make recommendations, and support accountability. Under the Governance Framework, the AIA process mainly comprises: publishing the public body's definition of "algorithmic system"; publicly disclosing each algorithm's purpose, scope, intended use, and associated policies or practices; performing and publishing self-assessments; enabling public participation; publishing assessment results; and updating the AIA regularly.
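The AIA steps listed above form an ordered sequence that can be tracked as a checklist. The identifiers below paraphrase the Governance Framework's steps for illustration and are not official terminology.

```python
# Ordered steps of an AIA process, paraphrased from the Governance Framework
AIA_STEPS = [
    "publish_definition_of_algorithmic_system",
    "disclose_purpose_scope_and_intended_use",
    "perform_and_publish_self_assessment",
    "hold_public_participation",
    "publish_assessment_results",
    "schedule_periodic_updates",
]

def aia_status(completed: set) -> dict:
    """Report which steps remain, preserving the prescribed order."""
    remaining = [s for s in AIA_STEPS if s not in completed]
    return {"done": len(completed & set(AIA_STEPS)),
            "remaining": remaining,
            "complete": not remaining}

status = aia_status({"publish_definition_of_algorithmic_system",
                     "disclose_purpose_scope_and_intended_use"})
```

Keeping the order explicit matters because the later steps (public participation, publishing results) only make sense once the earlier disclosures exist for the public to respond to.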

3. Improve the supervision mechanism and legal liability system

On the one hand, taking into account the characteristics of AI technology itself, the EU offers pertinent suggestions on algorithmic transparency, a widely demanded but highly controversial measure. Algorithmic transparency does not mean explaining every step, every technical principle, or every implementation detail of an algorithm. Simply disclosing a system's source code does not provide effective transparency and may instead threaten data privacy or the safe application of the technology. Moreover, given the technical characteristics of AI, understanding a system as a whole is extremely difficult, and explaining a single specific decision is often ineffective. For modern AI systems, achieving transparency by explaining how a particular result was reached therefore faces enormous technical challenges and would greatly limit the application of AI; achieving effective transparency about the behavior and decision-making of AI systems is more desirable and can deliver significant benefits. The GDPR, for example, taking account of these technical characteristics, does not require an explanation of each specific automated decision; it requires only meaningful information about the logic involved and an explanation of the significance and envisaged consequences of automated decision-making.

On the other hand, with respect to the regulation of AI algorithms, the EU holds that most private-sector actors have limited resources and that the impact of their algorithmic decisions on the public is relatively limited, so strong regulatory tools such as algorithmic impact assessment should not be imposed on them. Requiring the private sector to adopt AIA indiscriminately would impose financial and administrative costs disproportionate to the risks posed by the algorithms, hindering private-sector innovation and technology adoption. Low-risk algorithmic systems can instead be regulated through legal liability, allowing the private sector to trade stricter tort liability for lower transparency and AIA requirements. Under the Governance Framework, regulatory mechanisms can be tiered: AIA requirements may be considered for algorithmic decision systems that could cause serious or irreversible consequences, while for systems with only general impact, operators can be made subject to stricter tort liability, with correspondingly reduced obligations to evaluate and certify their systems against best-practice standards. At the same time, a dedicated algorithmic regulator could be established, with responsibilities including conducting algorithmic risk assessments, investigating the use of algorithmic systems by suspected infringers, advising other regulators on algorithmic systems, and coordinating with standards bodies, industry, and civil society to determine relevant standards and best practices.
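The tiered approach can be summarized as a simple mapping from risk level to obligations. The tier names and the exact mapping below are an illustrative reading of the report, not its literal text.

```python
def obligations(risk: str) -> list:
    """Map a risk tier to the regulatory obligations it attracts.
    Tier names and mappings are an illustrative reading of the report."""
    if risk == "severe_or_irreversible":
        # systems whose decisions can cause serious or irreversible harm
        return ["algorithmic_impact_assessment", "transparency_reporting"]
    if risk == "general_impact":
        # lighter transparency/AIA duties traded for stricter tort liability
        return ["strict_tort_liability"]
    # low-risk systems: ordinary legal liability only
    return []

duties = obligations("severe_or_irreversible")
```

The design choice worth noticing is that obligations scale with the consequences of failure rather than with the sophistication of the technology, which is the report's answer to the cost-proportionality concern.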

4. Strengthening international cooperation on algorithm governance

The management and operation of algorithmic systems also require cross-border dialogue and collaboration. A single country's regulatory intervention in algorithmic transparency and accountability is likely to be read as protectionism or as an improper attempt to obtain foreign trade secrets. The Governance Framework therefore recommends establishing a permanent global Algorithmic Governance Forum (AGF) to engage the many stakeholders in algorithmic technology in international dialogue, exchange policies and expertise, and discuss and coordinate best practices in algorithmic governance.

Lessons from EU artificial intelligence ethics and governance

In the past two decades of Internet development, the EU has lagged behind the United States and China, and differences in legal policy are among the main reasons. As the author pointed out in "On the Relationship between Internet Innovation and Regulation - Based on the Perspective of Comparison between the United States, Europe, Japan and South Korea", the EU's rules on platform liability, privacy protection, and online copyright came earlier and were stricter than those of the United States and did not provide a legal environment suited to Internet innovation. Today, as we enter the intelligent era, ubiquitous data and algorithms are giving rise to a new AI-driven economic and social form, and the EU still lags behind the United States and other countries in artificial intelligence. The General Data Protection Regulation (GDPR), which came into effect in 2018, has a particularly large impact on the development and application of AI: many studies have shown that the GDPR has hindered the development of AI and the digital economy in the EU and added excessive burdens and uncertainty to business operations. [8]

Returning to artificial intelligence, the EU hopes that through strategy, industrial policy, an ethical framework, governance mechanisms, and a legal framework it can develop, apply, and deploy AI with ethical values embedded in it, and thereby take a leading role internationally. In this respect the EU does have unique advantages, but whether those advantages can ultimately be converted into international competitiveness in AI remains to be seen. Overall, the EU's exploration of AI ethics and governance offers three lessons.

1. Exploring the technical path of ethical governance

Clearly, in the face of the ethical and social impact of AI's future development, ethics must become a fundamental component of AI research and development, and AI ethics research and governance mechanisms must be strengthened. At present, realizing ethical governance of AI depends more on the power of industry and technology than on legislation and regulation: because technology and business models iterate rapidly, written law struggles to keep pace with technological development and may produce counterproductive or unexpected effects. More flexible governance tools such as standards, industry self-regulation, ethical frameworks, best practices, and technical guidelines will become increasingly important, especially in the early stages of a technology. Going further, just as the concept of privacy by design (PbD) has gained strong vitality in privacy and data protection over the past decade, making technical and design-based protection of personal privacy an indispensable part of the data-protection mechanism (with encryption, anonymization, and differential privacy playing important roles), the same idea can be transplanted to AI. Hence the EU's proposal of "ethics by design" (EbD). Going forward, this concept must be given life through standards, technical guidelines, and design principles, so that ethical values and requirements are transformed into components of AI products and services and values are embedded in technology.
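As a taste of what "embedding values into technology" can mean in practice, the sketch below implements the classic Laplace mechanism for a differentially private count, one of the privacy-by-design techniques mentioned above. The dataset and parameters are invented for illustration.

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0, seed=None):
    """Release a count with Laplace noise so that no single individual's
    record is revealed (epsilon-differential privacy for a counting query,
    whose sensitivity is 1)."""
    true_count = sum(1 for v in values if predicate(v))
    rng = random.Random(seed)
    u = rng.random() - 0.5            # uniform on [-0.5, 0.5)
    scale = 1.0 / epsilon             # sensitivity / epsilon
    # inverse-CDF sampling of Laplace(0, scale)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Illustrative query: how many people in a dataset are 40 or older?
ages = [25, 31, 44, 52, 19, 67]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=2.0, seed=7)
```

Smaller `epsilon` means more noise and stronger privacy; the value protection takes here is structural (the mechanism itself), not a policy bolted on afterwards, which is the essence of the by-design approach.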

2. Adopting a multi-stakeholder collaborative governance model

At present, artificial intelligence is integrating with, and reshaping, the economy and society at extraordinary speed. Its high degree of specialization and technical complexity makes it difficult for outsiders to accurately judge and understand its risks and uncertainties. It is therefore necessary, on the one hand, to bring regulators, decision makers, academia, industry, public institutions, experts, practitioners, and the public into the governance of new technologies through multi-stakeholder collaboration, avoiding a disconnect between decision makers and practitioners. On the other hand, the ethical awareness of researchers and the public should be raised through education and outreach on science and technology ethics, so that they consider not only narrow economic interests but also reflect on, and take precautions against, the potential impacts of developing and applying the technology. Only in this way can good governance of cutting-edge technologies be achieved through broad social participation and interdisciplinary research. Accordingly, the EU holds that the ethical governance of AI requires safeguards at different levels: government, industry, the public, and other actors should each establish safeguards at their own level.

3. Strengthening international cooperation on AI ethics and governance

The development of AI is closely bound up with data flows, the data economy, the digital economy, and cybersecurity, and AI research, development, and application are cross-border activities with an international division of labor, which calls for strengthened international cooperation and coordination on ethics and governance. For example, on May 22, 2019, OECD member countries adopted the OECD principles on artificial intelligence, the "Principles for responsible stewardship of trustworthy AI", comprising five items: inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; and accountability. [9] On June 9, 2019, the G20 approved human-centered AI principles drawn largely from the OECD AI principles; these are the first AI principles signed by governments and are expected to become an international benchmark. They aim to promote AI development under a people-centered concept, with practical and flexible standards and agile governance, and to jointly advance the sharing of AI knowledge and the building of trustworthy AI. [10] The concept of people-oriented AI development is thus gaining consensus in the international community. Under its guidance, we need to deepen international dialogue and exchange, work toward a coordinated AI ethics and governance framework at the international level, promote the research, development, and application of trustworthy, ethical AI, guard against the risks that AI development and application may bring, and ensure that technology is used for good and AI benefits humankind.

References:


[1]https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html?redirect

[2]https://www.eesc.europa.eu/en/our-work/opinions-information-reports/opinions/artificial-intelligence

[3]https://www.consilium.europa.eu/media/21620/19-euco-final-conclusions-en.pdf

[4]https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe

[5]https://www.europarl.europa.eu/doceo/document/A-8-2019-0019_EN.html#title2

[6]https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

[7]https://www.europarl.europa.eu/thinktank/en/document.html?reference=EPRS_STU(2019)624262

[8]https://itif.org/publications/2019/06/17/what-evidence-shows-about-impact-gdpr-after-one-year

[9]https://www.oecd.org/going-digital/ai/principles/

[10]https://g20trade-digital.go.jp/dl/Ministerial_Statement_on_Trade_and_Digital_Economy.pdf

Source: Tencent Research Institute
