Microsoft-funded start-up may have used its facial recognition technology for mass surveillance in Israel





Microsoft has always advocated for the ethical use of AI technology while at the same time selling that same technology to governments for dubious uses, arguing that regulation should stem from the democratic process rather than from the ethical concerns of employees and shareholders.

Every once in a while the hypocrisy of this position is exposed, and on this occasion it is with accusations that a Microsoft-funded start-up, AnyVision, is using AI facial recognition technology to track Palestinians working in Israel and around the West Bank.

According to an NBC report, AnyVision uses a network of thousands of cameras across the West Bank to track the movement of Palestinians through a system called “Google Ayosh”, where “Ayosh” refers to the occupied Palestinian territories and “Google” denotes the technology’s ability to search for people as easily as one searches for items on Google.

AnyVision won a top defence prize in Israel in 2018 and was lauded by Israel’s defence minister for preventing “hundreds of terror attacks” using “large amounts of data.”

AnyVision admits only to using its biometric technology for border-crossing control and has claimed to be the “most ethical company known to man.”

A document on the Israeli government’s website, however, lists the features of AnyVision Better Tomorrow as:

  1. Face recognition in the wild: Enrollment of individuals is not required. All individuals passing through the field of view are detected automatically without having to be stopped or interrupted. The system requires only 45×45 pixels to detect and recognize a face.
  2. One-to-many: A single sensor (fixed CCTV, PTZ, mobile device and others) extracts and recognizes all the faces in its field of view (versus common technology, which allows one individual per sensor).
  3. Utilizing existing CCTV: Uses existing (on-premise) camera infrastructure, requiring no special cameras or infrastructure upgrades.
  4. Mass scale: Designed for real-time mass crowd detection and recognition, with a detection time of 0.2 seconds against a database of up to 300 million individuals.
  5. Ethnic invariance: The neural nets are trained to detect and recognize faces of various ethnicities.
  6. Face obstructions: Invariant to disguises, sunglasses, facial hair, hats, makeup, etc.
  7. Accuracy and reliability: An accuracy level of up to 99.78% true positive rate and fewer than 1 false positive alarm per 25,000 people.
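To put the last figure in context against the “mass scale” claim above, here is a back-of-envelope calculation using only the numbers quoted in that list. It is a rough illustrative sketch, not data from the NBC report: the daily face-capture volume is an assumed figure, since the report says only that the network comprises thousands of cameras.

```python
# Rough illustration of what the quoted spec figures mean at scale.
# The false-positive rate (< 1 per 25,000) and true-positive rate (99.78%)
# are taken from the feature list above; the daily capture volume is an
# assumption made purely for illustration.

FALSE_POSITIVE_RATE = 1 / 25_000   # quoted: fewer than 1 false alarm per 25,000 people
TRUE_POSITIVE_RATE = 0.9978        # quoted: up to 99.78% true positive rate

faces_scanned_per_day = 500_000    # assumed volume for a network of thousands of cameras

expected_false_alarms = faces_scanned_per_day * FALSE_POSITIVE_RATE
missed_per_10k_matches = (1 - TRUE_POSITIVE_RATE) * 10_000

print(f"Expected false alarms per day: ~{expected_false_alarms:.0f}")
print(f"Genuine matches missed per 10,000 watch-listed faces: ~{missed_per_10k_matches:.0f}")
```

Even taking the vendor’s own numbers at face value, continuous scanning of large crowds yields a steady stream of false alerts (around 20 per day in this hypothetical), which is one reason critics distinguish one-off border-crossing checks from always-on mass surveillance.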

Microsoft has responded to the report by hiring former U.S. Attorney General Eric Holder to investigate whether the use of facial recognition technology such as AnyVision Better Tomorrow complies with its ethical principles.

Microsoft told NBC: “Microsoft takes these mass surveillance allegations seriously because they would violate our facial recognition principles.”

According to these principles, Microsoft would “advocate for safeguards for people’s democratic freedoms in law enforcement surveillance scenarios and will not deploy facial recognition technology in scenarios that we believe will put these freedoms at risk.”

“If we discover any violation of our principles, we will end our relationship.”

On the wider front, Microsoft backed a U.S. Senate bill that would require a court order before federal law enforcement could use the technology for targeted, ongoing surveillance. Neema Singh Guliani, senior legislative counsel for the American Civil Liberties Union, however, said the bill “falls woefully short of protecting people’s privacy rights.”

Microsoft itself currently sells technology similar to AnyVision’s, albeit directed at workplace surveillance, showing the double-edged nature of the technology.

There have been recent reports of facial recognition technology being used in China for what has been called automated racism, with discriminatory practices and restrictions applied to the Uyghur population there using facial recognition technology, facilitated in part by work from US AI academics, some of whom have worked for Microsoft.

Do our readers think Microsoft, which topped Forbes’ list of most ethical companies, is acting according to its stated principles? Let us know below.

