Unstoppable: The problem of racist artificial intelligence is growing!

The most important element of artificial intelligence today is deep learning. Deep learning essentially means feeding an algorithm existing data so that it learns to recognize patterns in new data without human intervention. Because artificial intelligence systems are mostly built by white Americans, various problems can arise when those systems are tested on Asian and African people. A recent study supports this observation.

According to the research, artificial intelligence algorithms can exhibit racial biases toward different ethnic groups, even when trained on data from those groups. An international team of researchers analyzed how accurately the algorithms performed on various measures of behavior and health, such as mood, memory, and cognition, based on functional MRI (fMRI) scans.



Researchers say the racist behavior may have two causes

Artificial intelligence algorithms can display various racist behaviors even when trained on fairer datasets. For example, a model designed to detect skin cancer was noted to be less effective at analyzing dark skin tones than lighter ones.

Using data obtained from a project, the research team analyzed thousands of human brain scans, including many fMRI scans. According to the article, when an algorithm is trained mostly on white American (WA) data, its prediction errors are higher for African Americans (AA). More interestingly, even when the algorithms are trained on data from African Americans alone, the errors do not disappear.
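The comparison described above, measuring how much larger prediction errors are for one group than another, can be sketched as a simple per-group error calculation. The function name and the toy numbers below are illustrative assumptions, not the researchers' actual data or code:

```python
def group_mae(y_true, y_pred, groups):
    """Mean absolute prediction error, computed separately per group label."""
    errors = {}
    for g in set(groups):
        diffs = [abs(t - p) for t, p, gr in zip(y_true, y_pred, groups) if gr == g]
        errors[g] = sum(diffs) / len(diffs)
    return errors

# Toy example: predicted vs. actual behavioral scores for two groups
y_true = [10.0, 12.0, 11.0, 9.0, 10.5, 11.5]
y_pred = [10.2, 11.8, 11.1, 7.0, 12.5, 9.0]
groups = ["WA", "WA", "WA", "AA", "AA", "AA"]

print(group_mae(y_true, y_pred, groups))
```

With these made-up numbers, the AA group shows a much larger mean error than the WA group, which is the kind of gap the study reports in real fMRI-based predictions.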


The researchers do not yet know why the model behaves this way, but they believe it may be connected to how the data were collected. For example, during preprocessing, aligning the brains to a standard brain template may have skewed comparisons between individual brains. A second possible reason is that the data collected from the patients are not completely accurate.

Jingwei Li, a research assistant at the Brain and Behavior Institute, confirmed that the study included measurements that differed between populations due to ethnicity. However, Li also emphasized that diverse data alone is not enough to make the systems less biased and more equitable.


Algorithmic bias is also something the US government is working on. NIST (the National Institute of Standards and Technology) published a report this week that reached similar conclusions. The report includes the following statement:

“Existing attempts to address the detrimental effects of algorithm bias continue to focus on computational factors such as the representativeness of datasets and the fairness of machine learning algorithms.”

So, what do you think about this subject? You can express your thoughts in the comments section or on the SDN Forum.
