As the pandemic continues to rage around the world, it’s becoming clear that COVID-19 will endure longer than some health experts initially predicted. Owing in part to slow vaccine rollouts, rapidly spreading new strains, and politically charged rhetoric around social distancing, the novel coronavirus is likely to become endemic, necessitating changes in the ways we live our lives.
Some of those changes might occur in brick-and-mortar retail stores, where touch surfaces like countertops, cash, credit cards, and bags are potential viral spread vectors. The pandemic appears to have renewed interest in cashierless technology like Amazon Go, Amazon’s chain of stores that allow shoppers to pick up and purchase items without interacting with a store clerk. Indeed, Walmart, 7-Eleven, and cashierless startups including AiFi, Standard, and Grabango have expanded their presence over the past year.
But as cashierless technology becomes normalized, there’s a risk it could be used for purposes beyond payment, particularly shoplifting detection. While shoplifting detection isn’t problematic on its face, case studies illustrate that it’s susceptible to bias and other flaws that could, at worst, result in innocent shoppers being falsely accused of theft.
Synthetic datasets
The bulk of cashierless platforms rely on cameras, among other sensors, to monitor the individual behaviors of customers in stores as they shop. Video footage from the cameras feeds into machine learning classification algorithms, which identify, for example, when a shopper picks up an item and places it in a shopping cart. During a session at Amazon’s re:Mars conference in 2019, Dilip Kumar, VP of Amazon Go, explained that Amazon engineers use errors like missed item detections to train the machine learning models that power its Go stores’ cashierless experiences. Synthetic datasets supplement the real footage, boosting the diversity of the training data and ostensibly the robustness of the models, which use both geometry and deep learning to ensure transactions are associated with the right customer.
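To make the mechanics concrete, here is a minimal sketch of the kind of classifier such a pipeline might feed, assuming per-frame features (hand-to-shelf distances, item visibility, and so on) have already been extracted from the video. The feature dimension, class names, and architecture are illustrative assumptions, not Amazon’s actual design.

```python
# Hypothetical sketch: classifying shopper-item interactions from
# pre-extracted per-frame features. Not Amazon's actual pipeline.
import torch
import torch.nn as nn

ACTIONS = ["no_interaction", "item_picked_up", "item_put_back"]

class InteractionClassifier(nn.Module):
    def __init__(self, feature_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, len(ACTIONS)),
        )

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        return self.net(frame_features)  # raw logits, one per action

model = InteractionClassifier()
features = torch.randn(1, 32)          # stand-in for one frame's features
probs = torch.softmax(model(features), dim=-1)
print(ACTIONS[int(probs.argmax())])
```

In a real deployment, a misclassification here (a missed pickup, say) is exactly the kind of error Kumar described feeding back into training.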
The problem with this approach is that synthetic datasets, if poorly audited, might encode biases that machine learning models then learn to amplify. Back in 2015, a software engineer discovered that the image recognition algorithms deployed in Google Photos, Google’s photo storage service, were labeling Black people as “gorillas.” Google’s Cloud Vision API recently mislabeled thermometers held by people with darker skin as guns. And countless experiments have shown that image-classifying models trained on ImageNet, a popular (but problematic) dataset containing photos scraped from the internet, automatically learn humanlike biases about race, gender, weight, and more.
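One basic safeguard such an audit could include is comparing error rates across demographic subgroups before deployment. The sketch below shows the idea with fabricated records and placeholder group labels; a real audit would use a labeled evaluation set with annotated attributes.

```python
# Minimal audit sketch: compare false positive rates across subgroups.
# All records below are fabricated placeholders for illustration.
from collections import defaultdict

records = [
    # (subgroup, model_flagged_theft, actually_theft)
    ("group_a", True, False),
    ("group_a", False, False),
    ("group_b", True, False),
    ("group_b", True, True),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, flagged, actual in records:
    if not actual:                     # only innocent shoppers count
        negatives[group] += 1
        if flagged:
            false_positives[group] += 1

for group in negatives:
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate {rate:.2f}")
```

A large gap between subgroup rates would be a red flag that the training data, synthetic or otherwise, encodes the biases described above.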
Jerome Williams, a professor and senior administrator at Rutgers University’s Newark campus, told NBC that a theft-detection algorithm might wind up unfairly targeting people of color, who are routinely stopped on suspicion of shoplifting more often than white shoppers. A 2006 study of toy stores found that middle-class white women were often given preferential treatment and that the police were never called on them, even when their behavior was aggressive. And in a recent survey of Black shoppers published in the Journal of Consumer Culture, 80% of respondents reported experiencing racial stigma and stereotypes when shopping.
“The people who get caught for shoplifting is not an indication of who’s shoplifting,” Williams told NBC. In other words, Black shoppers who feel they’ve been scrutinized in stores might be more likely to appear nervous while shopping, which might be perceived by a system as suspicious behavior. “It’s a function of who’s being watched and who’s being caught, and that’s based on discriminatory practices.”
Some solutions explicitly designed to detect shoplifting track gait — patterns of limb movement — among other physical characteristics. This is a potentially problematic measure, considering that disabled shoppers, among others, might have gaits that appear suspicious to an algorithm trained on footage of able-bodied shoppers. As the Disability Rights Section of the U.S. Department of Justice’s Civil Rights Division notes, some people with disabilities have a stagger or slurred speech related to neurological disabilities, mental or emotional disturbance, or hypoglycemia, and these characteristics may be misperceived as intoxication, among other states.
Tokyo startup Vaak’s anti-theft product, VaakEye, was reportedly trained on more than 100 hours of closed-circuit television footage to monitor the facial expressions, body and hand movements, clothing choices, and over 100 other aspects of shoppers. AI Guardsman, a joint collaboration between Japanese telecom company NTT East and tech startup Earth Eyes, scans live video for “tells” like when a shopper looks for blind spots or nervously checks their surroundings.
NTT East, for one, makes no claims that its algorithm is perfect. It sometimes flags well-meaning customers who pick up and put back items, as well as salesclerks restocking store shelves, a company spokesperson told The Verge. Despite this, NTT East claimed its system couldn’t be discriminatory because it “does not find pre-registered individuals.”
Walmart’s AI- and camera-based anti-shoplifting technology, which is provided by Everseen, came under scrutiny last May over its reportedly poor detection rates. In interviews with Ars Technica, Walmart workers said their top concern with Everseen was false positives at self-checkout. The employees believe that the tech frequently misinterprets innocent behavior as potential shoplifting.
Industry practices
Trigo, which emerged from stealth in July 2018, aims to bring checkout-less experiences to existing “medium to small” brick-and-mortar convenience stores. For a monthly subscription fee, the company supplies both high-resolution, ceiling-mounted cameras and an on-premises “processing unit” that runs machine learning-powered tracking software. Data is beamed from the unit to a cloud provider, where it’s analyzed and used to improve Trigo’s algorithms.
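In rough terms, that division of labor might look like the sketch below, where the on-premises unit runs inference locally and ships event summaries upstream. The endpoint URL, payload schema, and function names are invented for illustration; Trigo has not published its actual interfaces.

```python
# Illustrative edge-to-cloud flow, loosely modeled on the architecture
# described above. Endpoint and schema are placeholders, not Trigo's.
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://example.com/ingest"  # placeholder URL

def run_local_tracking(frame) -> dict:
    """Stand-in for the on-premises tracking model's per-frame output."""
    return {"timestamp": time.time(), "events": []}

def ship_to_cloud(event: dict) -> None:
    """Send an anonymized event summary upstream for offline analysis."""
    body = json.dumps(event).encode()
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)
```

The design choice matters for privacy: what leaves the store is whatever the vendor decides to serialize, which is precisely why the anonymization claims below are hard to verify from the outside.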
Trigo claims that it anonymizes the data it collects, that it can’t identify individual shoppers beyond the products they’ve purchased, and that its system is 99.5% accurate on average at identifying purchases. But when VentureBeat asked what specific anti-shoplifting features the product offers and how Trigo trains the algorithms that might detect theft, the company declined to comment.
Grabango, a cashierless tech startup founded by Pandora cofounder Will Glaser, also declined to comment for this article. Zippin says it requires shoppers to check in with a payment method and that staff is alerted only when malicious actors “sneak in somehow.” And Standard Cognition, which claims its technology can account for changes like when a customer puts back an item they initially considered purchasing, says it doesn’t and hasn’t ever offered shoplifting detection capabilities to its customers.
“Standard does not monitor for shoplifting behavior and we never have … We only track what people pick up or put down so we know what to charge them for when they leave the store. We do this anonymously, without biometrics,” CEO Jordan Fisher told VentureBeat via email. “An AI-driven system that’s trained responsibly with diverse sets of data should in theory be able to detect shoplifting without bias. But Standard won’t be the company doing it. We are solely focused on the checkout-free aspects of this technology.”
Separate interviews with The New York Times and Fast Company in 2018 tell a different story, however. Michael Suswal, Standard Cognition’s cofounder and chief operating officer, told The Times that Standard’s platform could look at a shopper’s trajectory, gaze, and speed to detect and alert a store attendant to theft via text message. (In the privacy policy on its website, Standard says it doesn’t collect biometric identifiers but does collect information about “certain body features.”) He also said that Standard hired 100 actors to shop for hours in its San Francisco demo store in order to train its algorithms to recognize shoplifting and other behaviors.
“We learn behaviors of what it looks like to leave,” Suswal told The Times. “If they’re going to steal, their gait is larger, and they’re looking at the door.”
A patent filed by Standard in 2019 would appear to support the notion that Standard developed a system to track gait. The application describes an algorithm trained on a collection of images that can recognize the physical features of customers moving in store aisles between shelves. The algorithm is designed to label each detected point as one of 19 different on-body points, including necks, noses, eyes, ears, shoulders, elbows, wrists, hips, ankles, and knees.
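A gait signal could, in principle, be derived from such keypoints by measuring how far joints travel between frames. The sketch below illustrates the idea using a left-ankle track; the stride computation and the sample coordinates are my assumptions for illustration, not Standard’s method.

```python
# Hypothetical gait-feature sketch based on the keypoints named in
# the patent. The stride heuristic is illustrative, not Standard's.
import math

# Per-frame (x, y) positions of one tracked shopper's left ankle,
# in arbitrary floor coordinates (fabricated sample data).
left_ankle_track = [(0.0, 0.0), (0.12, 0.01), (0.55, 0.02), (0.61, 0.02)]

def stride_lengths(track):
    """Distance the ankle travels between consecutive frames."""
    return [math.dist(a, b) for a, b in zip(track, track[1:])]

# A heuristic like the one Suswal described might flag unusually
# large strides near the exit -- exactly the kind of rule that could
# misread a disabled shopper's gait as suspicious.
print(max(stride_lengths(left_ankle_track)))
```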
In a statement emailed to VentureBeat, a Standard spokesperson said: “This patent is exclusively for our anonymous visual tracking system – a core piece of how we provide checkout – it is not used for intent recognition or anything related to shoplifting. We don’t do any gait recognition or other biometrics and we’re glad that we were able to clear up the discrepancy from the previous media stories that you mention. Bottom line is that our computer vision-based system isn’t able to identify people, we rely exclusively on a shopper checking in with their phone for us to get their payment information.”
Santa Clara-based AiFi also says its cashierless solution can recognize “suspicious behavior” inside stores within a defined set of shopping behaviors. Like Amazon, the company uses synthetic datasets to generate training and testing data without requiring customer data. “With simulation, we can randomize hairstyle, color, clothing, and body shape to ensure that we have diverse and unbiased datasets,” a spokesperson told VentureBeat. “We respect user privacy and do not use facial recognition or personally identifiable information. It is our mission to change the future of shopping to make it automated, privacy-conscious, and inclusive.”
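Appearance randomization of the kind AiFi describes is straightforward to sketch. The attribute lists below are placeholders; a real pipeline would feed the sampled parameters to a 3D rendering engine rather than collect them in a list.

```python
# Sketch of appearance randomization for synthetic training data.
# Attribute vocabularies are invented placeholders, not AiFi's.
import random
from dataclasses import dataclass

@dataclass
class SyntheticShopper:
    hairstyle: str
    hair_color: str
    clothing: str
    body_shape: str

HAIRSTYLES = ["short", "long", "curly", "braided", "none"]
HAIR_COLORS = ["black", "brown", "blond", "red", "gray"]
CLOTHING = ["t-shirt", "coat", "dress", "hoodie", "suit"]
BODY_SHAPES = ["slim", "average", "broad", "tall", "short"]

def sample_shopper(rng: random.Random) -> SyntheticShopper:
    return SyntheticShopper(
        hairstyle=rng.choice(HAIRSTYLES),
        hair_color=rng.choice(HAIR_COLORS),
        clothing=rng.choice(CLOTHING),
        body_shape=rng.choice(BODY_SHAPES),
    )

rng = random.Random(0)  # seeded so dataset builds are reproducible
dataset_spec = [sample_shopper(rng) for _ in range(1000)]
```

Note that uniform sampling only delivers “unbiased” data if the attribute vocabularies themselves span the real shopper population; a narrow list would quietly re-encode the very biases described earlier.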
A patent filed in 2019 by Accel Robotics reveals the startup’s proposed anti-shoplifting solution, which optionally relies on anonymous tags that don’t reveal a person’s identity. By analyzing camera images over time, a server can attribute motion to a person and purportedly infer whether they took items from a shelf with malintent. Shopper behavior can be tracked over multiple visits if “distinguishing characteristics” are saved and retrieved for each visitor, which could be used to identify shoplifters who’ve previously stolen from the store.
“[The system can be] configured to detect shoplifting when the person leaves the store without paying for the item. Specifically, the person’s list of items on hand (e.g., in the shopping cart list) may be displayed or otherwise observed by a human cashier at the traditional cash register screen,” the patent description reads. “The human cashier may utilize this information to verify that the shopper has either not taken anything or is paying/showing for all items taken from the store. For example, if the customer has taken two items from the store, the customer should pay for two items from the store.”
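Stripped of the patent language, that exit check reduces to a multiset comparison between what the system believes the shopper is carrying and what appears on the receipt, as in this toy sketch with invented item names.

```python
# Toy version of the exit check the patent describes: compare the
# items the system believes a shopper holds with what was paid for.
from collections import Counter

def unpaid_items(on_hand: list[str], paid: list[str]) -> Counter:
    """Items detected on the shopper but absent from the receipt."""
    return Counter(on_hand) - Counter(paid)

on_hand = ["soda", "chips", "chips"]
paid = ["soda", "chips"]

discrepancy = unpaid_items(on_hand, paid)
if discrepancy:
    # Per the patent, this surfaces to a human cashier for review
    # rather than triggering an automatic accusation -- the shopper
    # may simply be holding an item they brought into the store.
    print("Flag for review:", dict(discrepancy))
```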
Lack of transparency
For competitive reasons, cashierless tech startups are generally loath to reveal the technical details of their systems. But this does a disservice to the shoppers subjected to them. Without transparency regarding the applications of these platforms and the ways in which they’re developed, it will likely prove difficult to engender trust among shoppers, shoplifting detection capabilities or not.
Zippin was the only company VentureBeat spoke with that volunteered information about the data used to train its algorithms. It said that depending on the particular algorithm to be trained, the size of the dataset varies from a few thousand to a few million video clips, with training performed in the cloud and models deployed to the stores after training. But the company declined to say what steps it takes to ensure the datasets are sufficiently diverse and unbiased, whether it uses actors or synthetic data, and whether it continuously retrains algorithms to correct for errors.
Systems like AI Guardsman learn from their mistakes over time by letting store clerks and managers flag false positives as they occur. It’s a step in the right direction, but without more information about how these systems work, it’s unlikely to allay shoppers’ concerns about bias and surveillance.
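A feedback loop like that might look like the following sketch, where clerk-dismissed alerts are relabeled as hard negatives for the next training run. The data structures and review step are assumptions, since NTT East hasn’t published implementation details.

```python
# Sketch of a human-feedback loop: clerk-dismissed alerts become
# hard negatives for retraining. Structures are assumptions.
flagged_clips = [
    {"clip_id": "c1", "prediction": "shoplifting"},
    {"clip_id": "c2", "prediction": "shoplifting"},
]

def clerk_review(clip: dict) -> bool:
    """Returns True if a clerk confirms the alert was genuine."""
    return clip["clip_id"] == "c2"  # stand-in for a manual review UI

hard_negatives = [c for c in flagged_clips if not clerk_review(c)]

# Relabel dismissed alerts and queue them for the next training
# cycle, so the model stops firing on the same innocent behavior.
for clip in hard_negatives:
    clip["label"] = "benign"
retraining_queue = hard_negatives
```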
Experts like Christopher Eastham, a specialist in AI at the law firm Fieldfisher, call for frameworks to regulate the technology. And even Ryo Tanaka, the founder of Vaak, argues there should be notice before customers enter stores so that they can opt out. “Governments should operate rules that make stores disclose information — where and what they analyze, how they use it, how long they use it,” he told CNN.