The metaverse is coming, but we still don’t trust AI


PTI, May 18, 2022, 11:15 AM IST

Many people believe that technology is neutral or unbiased, but the reality is more complex, especially in an immersive environment.

If the proponents of the metaverse have their way, we’ll one day be lining up for healthcare or a mortgage in a virtual world run by virtual decision-makers. The design of the artificial intelligence systems driving this world, still the task of humans, has real potential for harm.

Beyond commercial incentives, implicit biases that exist offline, based on ethnicity, gender, and age, are often reflected in the big data collected from the internet.

Machine learning models trained on these bias-embedded datasets unsurprisingly adopt the same biases. For instance, in 2019, Facebook (now Meta) was sued by the US Department of Housing and Urban Development for “encouraging, enabling, and causing” discrimination on the basis of race, gender, and religion through its advertising platform.

Facebook later said it would take “meaningful steps” to stop such behavior, but it continued to deliver the same discriminatory ad service to over two billion users based on their demographic information.

Technical flaws during data collection, sampling, and model design can further exacerbate unfairness by introducing outliers, sampling bias, and temporal bias (where a model works well at first but fails later because future changes weren’t considered when it was built).
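Temporal bias in particular can be shown with a toy drift example: a model fitted on one period quietly degrades on a later one. All numbers below are synthetic, purely for illustration.

```python
# Toy temporal bias: a constant predictor fitted on historical data drifts later.
train = [10.0, 11.0, 9.5, 10.5]    # metric observed when the model was built
future = [14.0, 15.5, 13.5, 15.0]  # same metric two years on, after the world shifted

prediction = sum(train) / len(train)  # the model "learns" the historical mean

# Mean absolute error on each period: small at build time, large after drift.
train_error = sum(abs(x - prediction) for x in train) / len(train)
future_error = sum(abs(x - prediction) for x in future) / len(future)
```

The predictor looks fine when evaluated on the data it was built from (`train_error` is small), but its error grows several-fold on the later period, which is exactly the failure mode the article describes.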

As AI nonetheless pervades more of our daily lives, governments and the tech giants have started talking about “Trustworthy AI”, a term formalized by the European Commission in 2019 with its Ethics Guidelines.

The guidelines speak to issues of fairness, but current systems are already challenged to define what’s fair on the current internet, let alone in the metaverse.

A recent study exploring Trustworthy AI and the metrics selected to deliver it found most were based on functionality-driven design rather than user-centered design.

Looking specifically at search engine ranking and recommendation systems, we already know search engine rankings sometimes systematically favor certain sites over others, distorting the objectivity of the results and eroding users’ trust. In recommendation systems, the number of recommendations is often fixed to promote products or ads with greater commercial benefit rather than to make fair recommendations based on the ethical use of data.

To fix these issues and deliver “trustworthy AI”, search engines must guarantee that users receive neutral and impartial services. Deciding on the metrics for fairness is where it gets difficult. A common strategy of metric selection is to focus on one factor and measure the deviation from the equality of that factor.

For search engine rankings, for example, this means focusing on the (potential) attention items receive from users, measured through factors such as click-through rates, exposure, or inferred content relevance, and then working out the gap between what an average user sees and what a user sees when bias is at play.
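The gap described above can be sketched as a simple exposure comparison between two groups of items in a ranked list. The position-discount (logarithmic, as in DCG-style metrics), the group labels, and the item names below are all illustrative assumptions, not a standard definition.

```python
import math

def exposure(positions):
    """Total exposure of a group's items, discounting lower ranks logarithmically."""
    return sum(1.0 / math.log2(pos + 1) for pos in positions)

def exposure_disparity(ranking, group_of):
    """Gap between the average per-item exposure of two groups in one ranking.

    ranking  : list of item ids, best first (position 1 is the top slot)
    group_of : dict mapping item id -> "A" or "B"
    """
    positions = {"A": [], "B": []}
    for rank, item in enumerate(ranking, start=1):
        positions[group_of[item]].append(rank)
    avg = {g: exposure(p) / len(p) for g, p in positions.items() if p}
    return abs(avg.get("A", 0.0) - avg.get("B", 0.0))

# Toy example: group A items dominate the top slots, so A gets outsized exposure.
ranking = ["a1", "a2", "b1", "a3", "b2", "b3"]
groups = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B", "b3": "B"}
gap = exposure_disparity(ranking, groups)
```

A perfectly interleaved ranking would drive `gap` toward zero; the skewed toy ranking above leaves group A with roughly two-thirds more average exposure than group B.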

Reviewers of recommender systems have employed similar fairness metrics, measuring bias, average, and score disparities across user groups.

Trustworthy AI design and metric selection in such systems also often focus on functionality during specific life-cycle phases. Ideally, it should consider trustworthiness through the whole life-cycle of usage.

These considerations will be even more important in the metaverse. Immersive in nature, the metaverse is more tied to users’ feelings and experiences than current cyberspace. These experiences are harder to quantify and assess and pose more challenges for those trying to determine what “fair AI” is.

The current mindset of trustworthy AI design and metric selection, restricted by the aforementioned design philosophies, takes into consideration only part of human cognition, specifically the conscious and concrete areas that can be more easily measured and quantified.

Pattern recognition, language, attention, perception, and action are widely explored by AI communities. The exploration of the unconscious and abstract areas of cognition, such as mental health and emotions, is still new. Methodological limits are a key reason for this, for example, the lack of devices and theories to accurately capture bioelectrical signals and infer someone’s emotions from them.

A new set of metrics will be required for the metaverse to ensure fairness. Designers will need to:

Carefully select data. It’s dangerous to simply throw data at an AI model: the data often inherits bias from the real world where it was collected. System operators should carefully select data samples with a focus on ensuring data diversity.
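One minimal way to operationalise “carefully select data” is stratified sampling, so that each demographic group contributes a fixed share of the training set rather than whatever share the raw feed happened to contain. The group names, field name, and quotas below are illustrative assumptions.

```python
import random
from collections import defaultdict

def stratified_sample(records, group_key, per_group, seed=0):
    """Draw an equal-sized sample from every group to balance a dataset.

    records   : list of dicts, each carrying a demographic field `group_key`
    per_group : number of records to keep from each group
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    buckets = defaultdict(list)
    for record in records:
        buckets[record[group_key]].append(record)
    sample = []
    for group, rows in sorted(buckets.items()):
        if len(rows) < per_group:
            raise ValueError(f"group {group!r} has only {len(rows)} records")
        sample.extend(rng.sample(rows, per_group))
    return sample

# Toy feed: the 18-25 band is heavily over-represented in the raw collection.
feed = ([{"age_band": "18-25"}] * 80
        + [{"age_band": "26-40"}] * 15
        + [{"age_band": "41+"}] * 10)
balanced = stratified_sample(feed, "age_band", per_group=10)
```

The balanced sample contains exactly ten records per age band regardless of the raw proportions; the `ValueError` guard surfaces groups that are too small to meet the quota rather than silently under-representing them.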

Design a fair system. The system should guarantee all users a neutral experience, uninfluenced by factors such as age, education level, or environment. A fair system design also helps ensure the diversity of data collection.

Design a fair AI algorithm. Aiming to improve utility for the majority, AI algorithms normally prioritize the optimization of common performance metrics such as accuracy.

For this reason, many AI algorithms set thresholds that exclude users who might hurt this goal, such as those with poor network connections. Balancing the trade-off between algorithm performance and fairness is an important part of fair AI algorithm design.
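A toy illustration of how such a threshold can silently skew participation: with hypothetical latency figures and group labels (all invented for this sketch), a strict cutoff admits nearly all of one group while excluding most of another.

```python
def participation_rates(users, latency_cutoff_ms):
    """Share of each group admitted under a latency threshold.

    users : list of (group, latency_ms) pairs; groups and numbers are synthetic.
    """
    totals, admitted = {}, {}
    for group, latency in users:
        totals[group] = totals.get(group, 0) + 1
        if latency <= latency_cutoff_ms:
            admitted[group] = admitted.get(group, 0) + 1
    return {g: admitted.get(g, 0) / n for g, n in totals.items()}

# Rural users on slower links are disproportionately cut by a strict threshold.
users = [("urban", 40), ("urban", 55), ("urban", 60), ("urban", 70),
         ("rural", 90), ("rural", 120), ("rural", 65), ("rural", 150)]
strict = participation_rates(users, latency_cutoff_ms=80)
relaxed = participation_rates(users, latency_cutoff_ms=160)
```

Under the strict cutoff every urban user participates but only a quarter of rural users do; relaxing the threshold restores equal participation at some cost to the performance metric the threshold was protecting, which is exactly the trade-off the article describes.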

Ensure fair usage. After designing a fair system and algorithm and training with fairly collected data samples, the next step is to ensure fair usage for all users without bias based on ethnicity, gender, age, etc.

This last piece of the cycle is the key to sustaining fairness by allowing continuous collection of diverse data and user feedback to optimize fairness.
