One week on from its launch, Amazon Go is an impressive state-of-the-art retail experience – but is it the future of shopping or a PR-fuelled own goal? Antony Edwards, CTO of Testplant, looks at how the store could lose Amazon the customer-recommended purchases that fuelled its boom in growth in the first place.
Amazon’s mission has always been to make shopping as easy and convenient as possible, and the company has consequently worked to redefine how its customers experience shopping. To that end, on 22nd January the first of Amazon’s much-hyped Amazon Go stores opened in Seattle.
At first glance, its latest idea seems almost flawless. Shoppers can enter with a simple scan of their mobile phones, choose what they want and leave—without a queue in sight. However, will the gates that scan each individual shopper in Amazon Go be able to register everyone and everything coming in and out of the stores?
For the scanning gates to work, Amazon uses a group of tools it calls Just Walk Out technology: a mixture of computer vision, sensor fusion and deep learning. This technology creates a customer’s ‘digital twin’, which interacts with a digital ‘twin’ of the store. It’s the same approach used in autonomous vehicles, where a digital replica of the driver is built and maintained. The problem here is the reliability, consistency and even adaptability of the pattern recognition system. If a ‘digital twin’ is created from an archetypal physical image of an object, will it be able to recognise objects that don’t fit that image? And if an object is physically deformed, can its digital version adjust its shape accordingly?
The problem is perhaps clearest with fruit, vegetables and other perishable produce. Will Just Walk Out technology recognise loose fruits like grapes or berries? Will it pick up on those that grow into irregular shapes, like pears or avocados? And will it be able to scan ones that change colour, like bananas? Foods with physical variation won’t always align with their own ‘digital twins’, a problem that could soon cause Amazon a considerable amount of hassle. Ensuring the accuracy and learning capabilities of the ‘digital twin’ should now be a top priority.
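To see why an archetype-based recogniser struggles with variable produce, consider this minimal sketch – purely illustrative, not Amazon’s actual system. It matches an item’s colour features against one stored archetype per product; all names, feature values and thresholds here are hypothetical.

```python
# Illustrative nearest-prototype recogniser: one "archetype" feature
# vector per product, matched by Euclidean distance. Hypothetical
# values throughout -- not Amazon's real pipeline.

import math

# Hypothetical archetype features: average (red, green, blue) on a 0-1 scale.
ARCHETYPES = {
    "banana":  (0.85, 0.75, 0.20),   # ripe yellow
    "avocado": (0.25, 0.40, 0.15),   # dark green
}

DISTANCE_THRESHOLD = 0.15  # beyond this, the item goes unrecognised


def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def recognise(features):
    """Return the closest product, or None if nothing is close enough."""
    best, best_dist = None, float("inf")
    for name, prototype in ARCHETYPES.items():
        d = distance(features, prototype)
        if d < best_dist:
            best, best_dist = name, d
    return best if best_dist <= DISTANCE_THRESHOLD else None


ripe_banana    = (0.83, 0.74, 0.22)  # close to the stored archetype
browned_banana = (0.45, 0.30, 0.15)  # ripened far past the archetype

print(recognise(ripe_banana))     # "banana"
print(recognise(browned_banana))  # None -- too far from every archetype
```

The browned banana is still a banana to any human shopper, but it falls outside the single stored archetype and goes unrecognised. A production system would learn a distribution of appearances per product rather than one prototype – exactly the learning capability the article argues must be a priority.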
Additionally, this testing needs to address the customers themselves. We’ve already seen problems emerge from similar innovations. For example, a soap dispenser in an Atlanta hotel didn’t work for an African-American guest because the invisible light emitted by its infrared LED bulb wasn’t reflected back to the sensor, voice interfaces often fail for people with Glaswegian accents, and haptics rarely consider users with mobility limitations. Companies producing sensor-centred products need to test them on multiple users, not a single archetype. Testers need to ensure that technology benefits everyone.
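The soap-dispenser failure can be made concrete with a small sketch – hypothetical numbers, not the real device. A trigger fires only when the reflected infrared signal clears a threshold, and checking it across several user profiles, rather than one, is what surfaces the gap.

```python
# Illustrative model of an IR proximity sensor tested across user
# archetypes. All reflectance values and the threshold are hypothetical,
# loosely modelled on the soap-dispenser failure described in the text.

TRIGGER_THRESHOLD = 0.30  # hypothetical minimum reflected signal


def dispenser_triggers(skin_reflectance, emitted_intensity=1.0):
    """True if the reflected IR signal clears the sensor threshold."""
    return skin_reflectance * emitted_intensity >= TRIGGER_THRESHOLD


# Testing only the first profile would pass; testing a range exposes the gap.
REFLECTANCE_PROFILES = {
    "light skin":  0.60,
    "medium skin": 0.40,
    "dark skin":   0.20,  # reflected signal falls below the threshold
}


def untested_failures(profiles):
    """Return the profiles for which the sensor would never trigger."""
    return [label for label, r in profiles.items()
            if not dispenser_triggers(r)]


print(untested_failures(REFLECTANCE_PROFILES))  # ['dark skin']
```

A test suite built around one archetypal user would report the dispenser as working; iterating over diverse profiles is what turns the design flaw into a visible failure before the product ships.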
The same applies to ‘digital twins’. If a sensor fails to recognise products because their physical form doesn’t match a categorised definition, the same can happen to customers themselves, and to whatever the sensor happens to recognise as ‘human’. If a customer has a disability, for example, will an Amazon Go store still recognise them?
While Amazon has almost certainly tested the store with wheelchair users, has it tested it with prosthetic limbs – a more common situation among ex-military individuals, and one that may not obviously pose a problem to sensors? Amazon’s testers may assume a person with a prosthetic limb presents the same physicality for a ‘digital twin’ to be produced. But the question then turns to the prosthetic material’s impact on the sensors: do Amazon Go’s gates rely on sensors that only recognise specific dimensions or movement patterns in shoppers’ legs? For Amazon Go customers to reap the benefits of these revolutionary stores, the process will need to work for all customers, and will consequently need to be tested on all kinds of individuals, whether disabled, non-white, or otherwise.
While this is a new hurdle for Amazon Go to surmount, the popularity of such smart technology should make us question how far ‘digital twins’ will be developed. Doctors are already using the technology to model footballers’ heads and predict where injuries may occur, and engineers create virtual models of buildings before construction begins. With the practice being used across such diverse industries, its spread is unlikely to stop there.
The possibilities of digital twin technology are very exciting, but only if computer vision, sensor fusion and deep learning are tested and moulded to changes in the individual, ensuring that all types of customers can use the technology. Digital twins need to be diversified, so that all humans are registered – not only those who fit into a system that recognises one archetypal form.
By Antony Edwards