MIT researchers recently made one of the most daring claims about artificial intelligence we’ve seen: they believe they’ve built an AI that can identify a person’s race using only medical images. And, according to the mass media, they have no idea how it works!
Color us unconvinced. If you believe that, I’d like to sell you an NFT of the Brooklyn Bridge.
Let’s be clear up front: per the team’s paper, the models predict people’s self-reported race:
In our study, we show that standard AI deep learning models can be trained to predict race in medical images with high performance across multiple imaging modalities.
Prediction and identification are two completely different things. If a prediction is wrong, it’s still a prediction. If an identification is wrong, it’s a misidentification. These are important distinctions.
AI models can be fine-tuned to predict anything, including concepts that are not real.
Here’s an old analogy I like to trot out in situations like this.
I can predict with 100% accuracy how many lemons on a given lemon tree contain aliens from another planet.
Because I’m the only one who can see the aliens in the lemons, I’m what you’d call a “database.”
I can stand next to your AI and point to all the lemons that contain aliens. The AI will try to figure out what the lemons I point at have in common, so it can learn to predict which lemons I think contain aliens.
Eventually, the AI will look at a new lemon tree and try to guess which lemons it thinks I would say contain aliens.
Even if it guesses with 70% accuracy, it is still 0% accurate at determining which lemons contain aliens, because there are no aliens in lemons.
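The gap between label accuracy and identification accuracy can be sketched in a few lines of Python. This is a toy simulation of the lemon analogy; the names and numbers are illustrative, not from the study:

```python
import random

random.seed(0)

# A "labeler" who claims some lemons contain aliens (a fictional property).
lemons = list(range(100))
labeler_says_alien = {lemon: random.random() < 0.5 for lemon in lemons}

# A model that agrees with the labeler about 70% of the time.
def model_predicts_alien(lemon):
    if random.random() < 0.7:
        return labeler_says_alien[lemon]    # agree with the labeler
    return not labeler_says_alien[lemon]    # disagree with the labeler

predictions = {lemon: model_predicts_alien(lemon) for lemon in lemons}

# "Accuracy" measured against the labeler's made-up labels:
label_accuracy = sum(
    predictions[l] == labeler_says_alien[l] for l in lemons
) / len(lemons)

# Accuracy at finding real aliens: no lemon actually contains one,
# so every "alien" prediction is a misidentification.
true_positives = 0

print(f"agreement with labeler: {label_accuracy:.0%}")  # close to 70%
print(f"real aliens found: {true_positives}")           # always 0
```

High agreement with the labels never establishes that the labels describe anything real.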
In other words, an AI can be trained to predict anything, as long as the following conditions are met:
- Don’t give it the option to say “I don’t know.”
- Keep adjusting the model’s parameters until you get the answer you want.
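The first condition is structural: a standard classifier takes the highest-scoring label no matter how uncertain it is. A minimal sketch (a hypothetical toy function, not MIT’s model) shows why it always produces an answer:

```python
# Argmax over class scores: there is no "I don't know" output.
def predict(scores):
    """Return the label with the highest model confidence."""
    return max(scores, key=scores.get)

# Even when the model is maximally uncertain, it still picks a label:
print(predict({"A": 0.34, "B": 0.33, "C": 0.33}))  # prints "A"
```

Unless an abstain option is explicitly designed in, every input gets a confident-looking label.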
No matter how accurately an AI system predicts labels, those predictions are useless for identification, especially in matters involving individual humans, unless the system can show how it arrived at them.
Also, claims about “accuracy” don’t mean what the media seem to think when it comes to these kinds of AI models.
The MIT model achieves its 99% accuracy on labeled data. In other words, in the wild (i.e., on unlabeled images), you can never be sure the AI has made a correct assessment unless a human reviews its results.
Even with 99% accuracy, MIT’s AI would still misclassify 79 million humans if given a database containing images of every living human. Worse, there’s no way to know which 79 million were misclassified without going to all 7.9 billion people on the planet and asking them to verify the AI’s assessment of their particular image. That would defeat the purpose of using AI in the first place.
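The arithmetic behind that figure is simple base-rate math: a 1% error rate applied to the whole population.

```python
# Error count at 99% accuracy over the global population (figures from the text).
population = 7_900_000_000
accuracy = 0.99

misclassified = round(population * (1 - accuracy))
print(f"{misclassified:,}")  # prints 79,000,000
```

Small error rates stop looking small as soon as the input set gets large.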
An important point: training an AI to predict the labels in a database does not teach it to identify anything. That’s not what AI does. It doesn’t make determinations or identify particular objects; it merely attempts to predict the labels that human developers applied to a dataset.
The MIT team concluded in their paper that their model could be dangerous in the wrong hands.
The results of our study emphasize that the ability of AI deep learning models to predict self-reported race is not in itself the important issue.
However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noisy medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging.
It is important for AI developers to consider the potential risks of their creations. However, this particular warning has little basis in reality.
The models built by the MIT team can achieve benchmark accuracy on large databases, but, as described above, there is no way to verify that the AI is correct without already knowing the ground truth.
Basically, MIT is warning that evil doctors and medical technicians could potentially use similar systems to commit racial discrimination at scale.
However, this AI cannot determine race; it predicts the labels of a specific dataset. The only way to use this model (or one like it) for identification would be to cast a wide net, and that only works if the identifier doesn’t care how often the machine gets it wrong.
The one thing you can be sure of is that its individual results can’t be trusted unless you double-check them against the ground truth. And the more images the AI processes, the more mistakes it will make.
In summary, MIT’s “new” AI is just a magician’s illusion. That’s fine; models like this are often quite useful when getting things right matters less than getting them done quickly. But there’s no reason to believe bad actors could use them as race detectors.
You could take the same MIT model and train it to go into a grove of lemon trees and, using the label database I created, predict with 99% accuracy which lemons I say contain aliens.
This AI can only predict labels. It does not identify race.