Steve Thomas - IT Consultant

If the measure of progress in technology is that devices should become ever smaller and more capable, then OrCam Technologies is on a roll. The Israeli firm’s OrCam MyEye, which fits on the arm of a pair of glasses, is far more powerful and much smaller than its predecessor. With new AI-based Smart Reading software released in July, the device not only “reads” text and labels but also identifies people by name and describes other important aspects of the visual world. It also interacts with the user, principally people who are blind or visually impaired, by means of an AI-based smart voice assistant.

At the upcoming Sight Tech Global virtual event, we’re pleased to announce that OrCam’s co-founder and co-CEO, Professor Amnon Shashua, will be a featured speaker. The event, which will take place virtually on Dec. 2-3, is focused on how AI-related technologies will influence assistive technology and accessibility in the years ahead. Attendance is free and pre-registration is open now.

Shashua is a towering figure in the technology world. He is the co-founder not only of OrCam but also of Mobileye, the company that provides computer-vision sensors and systems for automotive safety and autonomous navigation. Intel acquired Mobileye for $15.3 billion in 2017, the largest acquisition of an Israeli company ever.

Shashua started OrCam at the prompting of his aunt, who was losing her sight and hoped that her technologist nephew could apply his prodigious talents as a scientist and AI expert to help. With that goal in mind, he started OrCam in 2010 with co-founder Ziv Aviram. The firm has gone on to raise $130.4 million from investors, including Intel, and to sell the OrCam MyEye device to tens of thousands of users in over 50 countries. At $3,900 per device in the U.S., the OrCam MyEye is far from affordable for most people, but the firm says the price will come down as production increases.

At the start of a new era for assistive technology, OrCam’s approach with the lightweight, offline-operating OrCam MyEye is nothing if not thought provoking (the device was recognized as a TIME Best Invention of 2019). Will miniaturization of sophisticated sensors and electronics lead to unobtrusive sensor arrays as the foundation of assistive tech? Will AI-based natural-language processing lead to all-purpose, customizable personal assistants that adapt to users’ abilities as needed?

“In OrCam’s roadmap,” says Shashua, “the ultimate AT must have the right balance between computer vision and natural language processing. For example, the ‘smart reading’ feature recently launched harnesses NLP (natural language processing) to guide the device as to which text information to extract and communicate to the user. NLP allows the user to specify precisely what he or she needs to know. Similarly, the ‘orientation’ feature recently launched allows the user to prompt the device to describe the objects in the scene and to provide audible guidance to those objects. We see the ‘orientation’ feature growing with respect to vocabulary, with respect to search (e.g., ‘notify me when you see a Toilet sign’) and with respect to obstacle avoidance (where is the free space in the scene). The technological challenge in bringing these desires into reality critically depends on the progress of compute and algorithms.

“By ‘compute,’” says Shashua, “I mean the ever-growing trend to miniaturize processing power, which enables more sophisticated algorithms to reside on a smaller, battery-powered footprint. By ‘algorithms’ I mean the ever-increasing sophistication of deep tech to mimic human intelligence. Combining the two creates a powerful impact on the future of assistive tech for people who are blind and visually impaired.”

Shashua received a B.Sc. in mathematics and computer science from Tel Aviv University in 1985 and an M.Sc. in computer science from the Weizmann Institute of Science in 1989. He received a Ph.D. in brain and cognitive sciences from the Massachusetts Institute of Technology (MIT) in 1993, while working at its Artificial Intelligence Laboratory.

Sight Tech Global is a virtual event on Dec. 2-3 and attendance is free. Pre-registration is open now. 

Sight Tech Global welcomes sponsors. Current sponsors include Verizon Media, Google, Waymo, Mojo Vision and Wells Fargo. The event is organized by volunteers and all proceeds benefit The Vista Center for the Blind and Visually Impaired in Silicon Valley.

Google today announced a few new features for Wear OS that we can expect from the next over-the-air update, which is slated to arrive this fall.

The focus here, Google says, is on fundamentals, including improved performance — with up to 20% faster app startup times, for example.

The company also said it would improve the pairing process and that we’ll see UI improvements with “more intuitive controls for managing different watch modes and workouts.” What exactly that will look like isn’t clear, though, as Google didn’t provide any details of the changes.


One feature Google did talk about is a new handwashing timer it is releasing in response to the COVID-19 pandemic. Unlike Apple’s automatic handwashing timer in watchOS, Google’s feature is not hands-free: you have to tap a dedicated tile to trigger it, which sadly makes it less likely that users will actually use it regularly (but you’re already singing Happy Birthday twice while you wash your hands anyway, right?).

Wear OS is also getting a new weather experience that will be easier to read on the go and will provide an hourly forecast and access to local weather alerts.


The Wear OS team notes that it also plans to bring “the best of Android 11” to wearables. For developers, that mostly means being able to use the latest Android developer tools to build their Wear OS apps. What exactly it means for users also remains to be seen.

While we’re still waiting for Google to release its own watch, the company today noted that a number of new watch OEMs have recently signed on to Wear OS, including Oppo, Suunto and Xiaomi.

In a setback for Google’s plan to acquire health wearable company Fitbit, the European Commission has announced it’s opening an investigation into a range of competition concerns that have been attached to the proposal from multiple quarters.

This means the deal is on ice for a period of time that could last until early December.

The Commission said it has 90 working days to take a decision on the acquisition — so until December 9, 2020.

Commenting on opening an “in-depth investigation” in a statement, Commission EVP Margrethe Vestager — who heads up both competition policy and digital strategy for the bloc — said: “The use of wearable devices by European consumers is expected to grow significantly in the coming years. This will go hand in hand with an exponential growth of data generated through these devices. This data provides key insights about the life and the health situation of the users of these devices. Our investigation aims to ensure that control by Google over data collected through wearable devices as a result of the transaction does not distort competition.”

Google has responded to the EU brake on its ambitions with a blog post in which its devices & services chief seeks to defend the deal, arguing it will spur innovation and lead to increased competition.

“This deal is about devices, not data,” Google VP Rick Osterloh further claims.

The tech giant announced its desire to slip into Fitbit’s data-sets back in November, when it announced a plan to shell out $2.1BN in an all-cash deal to pick up the wearable maker.

Fast forward a few months and CEO Sundar Pichai is being taken to task by lawmakers on home turf for stuff like ‘helping destroy anonymity on the Internet‘. Last year’s already rowdy antitrust drum beat around big tech has become a full-on rock festival, so the mood music around tech acquisitions might finally be shifting.

Since news of Google’s plan to grab Fitbit dropped, concerns about the deal have been raised all over Europe — with consumer groups, privacy regulators, and competition and tech policy wonks all sounding the alarm at the prospect of letting the adtech giant gobble up a device maker and help itself to a bunch of sensitive consumer health data in the process.

Digital privacy rights group Privacy International — one of the not-for-profits urging regulators not to rubberstamp the deal — argues the acquisition would not only squeeze competition in the nascent digital health market and in wearables, but also reduce “what little pressure there currently is on Google to compete in relation to privacy options available to consumers (both existing and future Fitbit users), leading to even less competition on privacy standards and thereby enabling the further degradation of consumers’ privacy protections”, as it puts it.

So much noise is being made that Google has already played the ‘we promise not to…’ card that’s a favorite of data-mining tech giants. (Typically followed, a few years later, with a ‘we got ya sucker’ joker — as they go ahead and do the thing they totally said they wouldn’t.)

To wit: From the get-go Fitbit has claimed users’ “health and wellness data will not be used for Google ads”. Just like WhatsApp said nothing would change when Facebook bought them. (Er.)

Last month Reuters revisited the concession, in an “exclusive” report that cited “people familiar with the matter” who apparently told it the deal could be waved through if Google pledged not to use Fitbit data for ads.

It’s not clear where the leak underpinning its news report came from but Reuters also ran with a quote from a Google spokeswoman — who further claimed: “Throughout this process we have been clear about our commitment not to use Fitbit health and wellness data for Google ads and our responsibility to provide people with choice and control with their data.”

In the event, Google’s headline-grabbing promises to behave itself with Fitbit data have not prevented EU regulators from wading in for a closer look at competition concerns — which is exactly as it should be.

In truth, given the level of concern now being raised about tech giants’ market power and adtech giant Google specifically grabbing a treasure trove of consumer health data, a comprehensive probe is the very least regulators should be doing.

If digital policy history has shown anything over the past decade where data is concerned, it’s that the devil is always in the fine-print detail. Moreover, the fast pace of digital markets means a competitive threat may be only a micro-pivot away from materializing. Theories of harm clearly need radical updating to take account of data-mining technosocial platform giants. And the Commission knows that — which is why it’s consulting on giving itself more powers to tackle tipping in digital markets. But it also needs to flex and exercise the powers it currently has — such as opening a proper investigation, rather than gaily waving tech giant deals through.

Antitrust may now be flavor of the month where tech giants are concerned — with US lawmakers all but declaring war on digital robber barons at last month’s big subcommittee showdown in Congress. But it’s also worth noting EU competition regulators — for all their heavily publicized talk of properly regulating the digital sphere — have yet to block a single digital tech merger.

And it remains to be seen whether that record will change by December.

“The Commission is concerned that the proposed transaction would further entrench Google’s market position in the online advertising markets by increasing the already vast amount of data that Google could use for personalisation of the ads it serves and displays,” it writes in a press release today.

Following a preliminary assessment process of the deal, EU regulators said they have concerns about [emphasis theirs]:

  • “the impact of the transaction on the supply of online search and display advertising services (the sale of advertising space on, respectively, the result page of an internet search engine or other internet pages)”
  • and on “the supply of ‘ad tech’ services (analytics and digital tools used to facilitate the programmatic sale and purchase of digital advertising)”

“By acquiring Fitbit, Google would acquire (i) the database maintained by Fitbit about its users’ health and fitness; and (ii) the technology to develop a database similar to Fitbit’s one,” the Commission further notes.

“The data collected via wrist-worn wearable devices appears, at this stage of the Commission’s review of the transaction, to be an important advantage in the online advertising markets. By increasing the data advantage of Google in the personalisation of the ads it serves via its search engine and displays on other internet pages, it would be more difficult for rivals to match Google’s online advertising services. Thus, the transaction would raise barriers to entry and expansion for Google’s competitors for these services, to the ultimate detriment of advertisers and publishers that would face higher prices and have less choice.”

The Commission views Google as dominant in the supply of online search advertising services in almost all EEA (European Economic Area) countries; as well as holding “a strong market position” in the supply of online advertising display services in a large number of EEA countries (especially off-social network display ads), and “a strong market position” in the supply of adtech services in the EEA.

All of which will inform its considerations as it looks at whether Google will gain an unfair competitive advantage by assimilating Fitbit data. (Vestager has also issued a number of antitrust enforcements against the tech giant in recent years, against Android, AdSense and Google Shopping.)

The regulator has also said it will further look at:

  • the “effects of the combination of Fitbit’s and Google’s databases and capabilities in the digital healthcare sector, which is still at a nascent stage in Europe”
  • “whether Google would have the ability and incentive to degrade the interoperability of rivals’ wearables with Google’s Android operating system for smartphones once it owns Fitbit”

The tech giant has already offered EU regulators one specific concession in the hopes of getting the Fitbit buy green lit — with the Commission noting that it submitted commitments aimed at addressing concerns last month.

Google suggested creating a data silo to hold data collected via Fitbit’s wearable devices — and where it said it would be kept separate from any other dataset within Google (including claiming it would be restricted for ad purposes). However the Commission expresses scepticism about Google’s offer, writing that it “considers that the data silo commitment proposed by Google is insufficient to clearly dismiss the serious doubts identified at this stage as to the effects of the transaction”.

“Among others, this is because the data silo remedy did not cover all the data that Google would access as a result of the transaction and would be valuable for advertising purposes,” it added.

Google makes reference to this data silo in its blog post, claiming: “This deal is about devices, not data. We’ve been clear from the beginning that we will not use Fitbit health and wellness data for Google ads. We recently offered to make a legally binding commitment to the European Commission regarding our use of Fitbit data. As we do with all our products, we will give Fitbit users the choice to review, move or delete their data. And we’ll continue to support wide connectivity and interoperability across our and other companies’ products.”

“We appreciate the opportunity to work with the European Commission on an approach that addresses consumers’ expectations of their wearable devices. We’re confident that by working closely with Fitbit’s team of experts, and bringing together our experience in AI, software and hardware, we can build compelling devices for people around the world,” it adds.

The potential for the Internet of Things to lead to distortion in market competition is troubling European Union lawmakers who have today kicked off a sectoral inquiry.

They’re aiming to gather data from hundreds of companies operating in the smart home and connected device space — via some 400 questionnaires, sent to companies big and small across Europe, Asia and the US — using the intel gleaned to feed a public consultation slated for early next year when the Commission will also publish a preliminary report. 

In a statement on the launch of the sectoral inquiry today, the European Union’s competition commissioner, Margrethe Vestager, said the risks to competition and open markets linked to the data collection capabilities of connected devices and voice assistants are clear. The aim of the exercise is therefore to get ahead of any data-fuelled competition risks in the space before they lead to irreversible market distortion.

“One of the key issues here is data. Voice assistants and smart devices can collect a vast amount of data about our habits. And there’s a risk that big companies could misuse the data collected through such devices, to cement their position in the market against the challenges of competition. They might even use their knowledge of how we access other services to enter the market for those services and take it over,” said Vestager.

“We have seen this type of conduct before. This is not new. So we know there’s a risk that some of these players could become gatekeepers of the Internet of Things, with the power to make or break other companies. And these gatekeepers might use that power to harm competition, to the detriment of consumers.”

The Commission recently opened up a consultation on whether regulators need new powers to address competition risks in digital markets, including being able to intervene when they suspect digital market tipping.

It is also asking for views on how to shape regulations around platform governance.

The IoT sectoral inquiry adds another plank to its approach to reformulating digital regulation in the data age. (Notably, competition chief Vestager is simultaneously the Commission EVP in charge of pan-EU digital strategy.)

On the IoT front, risks Vestager said she’s concerned about include what she couched as familiar antitrust behaviour such as “self-preferencing” — i.e. a company directing users towards its own products or services — as well as companies inking exclusive deals that send users to a “preferred” provider, thereby locking out more open competition.

“Whether that’s for a new set of batteries for your remote control or for your evening takeaway. In either case, the result can be less choice for users, less opportunity for others to compete, and less innovation,” she suggested.

“The trouble is that competition in digital markets can be fragile,” Vestager added. “When big companies abuse their power, they can very quickly push markets beyond the tipping point, where competition turns to monopoly. We’ve seen that happen before.  If we don’t act in good time, there’s a serious risk that it will happen again, with the Internet of Things.”

The commissioner’s remarks suggest EU lawmakers could be considering regulations that aim to enforce interoperability between smart devices and platforms — although Vestager also said they will be asking about any barriers to achieving such cross-working.

“For us to get the most out of the Internet of Things, our smart devices need to communicate. So if the devices from different companies don’t work together, then consumers may be locked in to just one provider.  And be limited to what that provider has to offer,” she said.

“We’re asking about the products they sell, and how the markets for those products work. We’re asking about data – how it’s collected, how it’s used, and how companies make money from the data they collect. And we’re asking about how these products and services work together, and about possible problems with making them interoperable.”

Vestager has raised concerns before about the potential for voice assistant technology to lead to market concentration and distortion — saying last year that such assistants present an acute challenge to regulators, who she said were then “trying to figure out how access to data will change the marketplace”.

The question of how access to digital data feeds platform monopolies has been a long-standing preoccupation for the now second-term competition chief.

The Commission’s work on figuring out how data access changes marketplace function remains something of a work in progress. Vestager has an open investigation into Amazon’s use of third-party data on her plate, for example. The Commission also inked a first set of rules on ecommerce platform fairness last year.

More rules may be incoming in a draft proposal for reformulating wider liability rules for platforms that’s slated to land by the end of this year, aka the forthcoming Digital Services Act.

The Commission noted today that a prior sector inquiry — into ecommerce markets — helped shape new rules against “unjustified geoblocking” in the EU, although it has not yet been able to dismantle geoblocking barriers to accessing digital services across the Single Market’s internal borders.

Google confirmed today via blog post that it has acquired Canadian smart glasses company North, which began life as human interface hardware startup Thalmic Labs in 2012. The company didn’t reveal any details about the acquisition, which was first reported to be happening by The Globe and Mail last week. The blog post is authored by Google’s SVP of Devices & Services Rick Osterloh, who cites North’s “strong technology foundation” as a key driver behind the deal.

Osterloh also emphasizes Google’s existing work in building “ambient computing,” which is to say computing that fades into the background of a user’s life, as the strategic reasoning behind the acquisition. North will join Google’s existing team in the Kitchener-Waterloo area, where North is already based, and it will aid with the company’s “hardware efforts and ambient computing future,” according to Osterloh.

In a separate blog post, North’s co-founders Stephen Lake, Matthew Bailey and Aaron Grant discuss their perspective on the acquisition. They say the deal makes sense because it will help “significantly advance our shared vision,” but go on to note that this will mean winding down support for Focals 1.0, the first-generation smart glasses product that North released last year, and cancelling any plans to ship Focals 2.0, the second-generation version that the company had been teasing and preparing to release over the last several months.

Focals received significant media attention following their release, and provided the most consumer-friendly wearable glasses computing interface ever launched. They closely resembled regular optical glasses, albeit with larger arms to house the active computing components, and projected a transparent display overlay onto one lens, which showed things like messages and navigation directions.

Around the Focals 1.0 debut, North co-founder and CEO Stephen Lake told me that the company had originally begun developing its debut product, the Myo gesture control armband, to create a way to interact naturally with the ambient smart computing platforms of the future. Myo read electrical pulses generated by the body when you move your arm and translated that into computer input. After realizing that devices it was designed to work with, including VR headsets and wearable computers like Google Glass, weren’t far enough along for its novel control paradigm to take off, they shifted to addressing the root of the problem with Focals.

Focals had some major limitations, however, including initially requiring that anyone wanting to purchase them go into a physical location for fitting, and then return for adjustments once they were ready. They were also quite expensive, and didn’t support the full range of prescriptions needed by many existing glasses-wearers. Software limitations, including limited access to Apple’s iMessage platform, also hampered the experience for Apple mobile device users.

North (and Thalmic before it) always employed talented mechanical and electronics engineers sourced from the nearby University of Waterloo, but its ideas typically failed to attract the kind of consumer interest that would have been required for sustained independent operation. The company had raised nearly $200 million in funding since its founding; as mentioned, there’s no word on the total amount Google paid, but it doesn’t seem likely to have been a blockbuster exit.

There are some wearables out there in the world making claims around COVID-19 and their ability to detect it, prevent it, certify that you don’t have it, and more. But a new wearable device from NASA’s Jet Propulsion Laboratory might actually do the most to prevent the spread of COVID-19 — and it’s not really all that technically advanced or complicated.

JPL’s PULSE wearable uses 3D-printed parts and readily available, affordable electronic components to do just one thing: remind a person not to touch their face. JPL’s designers claim it’s simple enough that the gadget “can easily be reproduced by anyone regardless of their level of expertise,” and to encourage more people and companies to actually do that, the lab has made available a full list of parts, 3D modelling files and full instructions for its assembly via an open source license.

The PULSE is essentially a pendant, worn around the neck between six inches and a foot from the head, which can detect when a person’s hand is approaching their face using an IR-based proximity sensor. A vibration motor then shakes out an alert, and the response becomes stronger as the hand gets closer to the face.
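JPL hasn’t published PULSE’s firmware logic here, but the behavior described — no alert beyond a trigger distance, with vibration ramping up as the hand closes in — can be sketched in a few lines. The distance thresholds and the linear ramp below are illustrative assumptions, not the device’s actual parameters:

```python
def vibration_intensity(distance_cm: float,
                        trigger_cm: float = 30.0,
                        full_cm: float = 5.0) -> float:
    """Map hand-to-face distance to a vibration strength in [0, 1].

    No alert at or beyond trigger_cm; full strength at or inside
    full_cm; a linear ramp in between. Thresholds are invented
    for illustration.
    """
    if distance_cm >= trigger_cm:
        return 0.0
    if distance_cm <= full_cm:
        return 1.0
    return (trigger_cm - distance_cm) / (trigger_cm - full_cm)
```

In a real device loop, the sensor reading would be polled and this value fed to the vibration motor driver on each tick.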

The hardware itself is simple – but that’s the point. It’s designed to run on readily available 3V coin batteries, and if you have a 3D printer to hand for the case and access to Amazon, you can probably put one together yourself at home in no time.

The goal of PULSE obviously isn’t to single-handedly eliminate COVID-19 — contact transmission from contaminated hands to a person’s mouth, nose or eyes is just one vector, and it seems likely that the respiratory droplets responsible for airborne transmission are at least as effective at passing the virus around. But just like regular mask-wearing can dramatically reduce transmission risk, minimizing how often you touch your face can have a big combinatory effect with other measures taken to reduce the spread.

Other health wearables might actually be able to tell you when you have COVID-19 before you show significant symptoms or have a positive test result – but work still needs to be done to understand how well that works, and how it could be used to limit exposure. JPL’s Pulse has the advantage of being effective now in terms of building positive habits that we know will limit the spread of COVID-19, as well as other viral infections.

Enterprise barcode scanner company Scandit has closed an $80 million Series C round, led by Silicon Valley VC firm G2VP. Atomico, GV, Kreos, NGP Capital, Salesforce Ventures and Swisscom Ventures also participated in the round — which brings its total raised to date to $123M.

The Zurich-based firm offers a platform that combines computer vision and machine learning tech with barcode scanning, text recognition (OCR), object recognition and augmented reality which is designed for any camera-equipped smart device — from smartphones to drones, wearables (e.g. AR glasses for warehouse workers) and even robots.

Use-cases include mobile apps or websites for mobile shopping; self checkout; inventory management; proof of delivery; asset tracking and maintenance — including in healthcare where its tech can be used to power the scanning of patient IDs, samples, medication and supplies.

It bills its software as “unmatched” in terms of speed and accuracy, as well as the ability to scan in bad light; at any angle; and with damaged labels. Target industries include retail, healthcare, industrial/manufacturing, travel, transport & logistics and more.

The latest funding injection follows a $30M Series B round back in 2018. Since then, Scandit says, it has tripled recurring revenues, more than doubled the number of blue-chip enterprise customers, and doubled the size of its global team.

Global customers for its tech include the likes of 7-Eleven, Alaska Airlines, Carrefour, DPD, FedEx, Instacart, Johns Hopkins Hospital, La Poste, Levi Strauss & Co, Mount Sinai Hospital and Toyota — with the company touting “tens of billions of scans” per year on 100+ million active devices at this stage of its business.

It says the new funding will go toward accelerating growth in new markets, including APAC and Latin America, as well as building out its footprint and ops in North America and Europe. Also on the slate: funding more R&D to devise new ways for enterprises to transform their core business processes using computer vision and AR.

The need for social distancing during the coronavirus pandemic has also accelerated demand for mobile computer vision on personal smart devices, according to Scandit, which says customers are looking for ways to enable more contactless interactions.

Another demand spike it’s seeing is coming from the pandemic-related boom in ‘Click & Collect’ retail and “millions” of extra home deliveries — something its tech is well positioned to cater to because its scanning apps support BYOD (bring your own device), rather than requiring proprietary hardware.

“COVID-19 has shone a spotlight on the need for rapid digital transformation in these uncertain times, and the need to blend the physical and digital plays a crucial role,” said CEO Samuel Mueller in a statement. “Our new funding makes it possible for us to help even more enterprises to quickly adapt to the new demand for ‘contactless business’, and be better positioned to succeed, whatever the new normal is.”

Also commenting on the funding in a supporting statement, Ben Kortlang, general partner at G2VP, added: “Scandit’s platform puts an enterprise-grade scanning solution in the pocket of every employee and customer without requiring legacy hardware. This bridge between the physical and digital worlds will be increasingly critical as the world accelerates its shift to online purchasing and delivery, distributed supply chains and cashierless retail.”

London-based femtech hardware startup Astinno has picked up an Innovate UK grant worth £360k ($450k) to fund further testing of a wearable it’s developing for women experiencing a perimenopause symptom known as hot flushes.

The sensor-packed device, which it’s calling Grace, is being designed to detect the onset of a hot flush and apply cooling to a woman’s wrist to combat the reaction — in a process it likens to running your wrists under a cold tap.

The aim is for algorithmically triggered cooling to happen quickly enough to prevent hot flushes from running their usual unpleasant and uncomfortable course. The bracelet itself is being designed to look like a chunky piece of statement jewellery.
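Astinno hasn’t disclosed how Grace’s detection algorithm works. As a purely hypothetical illustration of the general idea — flag a sudden rise in a skin sensor reading against a rolling baseline, then trigger the cooling element — an onset detector might look like this (the window size and threshold are invented for the example):

```python
from collections import deque

def make_flush_detector(window: int = 30, rise_threshold: float = 0.5):
    """Return a step function that flags a rapid rise over a rolling baseline.

    window: number of recent samples forming the baseline.
    rise_threshold: rise above the baseline mean that triggers cooling.
    Both values are illustrative, not Astinno's parameters.
    """
    history = deque(maxlen=window)

    def step(reading: float) -> bool:
        # Until the baseline window is full, just collect samples.
        if len(history) < window:
            history.append(reading)
            return False
        baseline = sum(history) / len(history)
        history.append(reading)
        # Trigger if the new reading jumps well above the recent baseline.
        return reading - baseline > rise_threshold

    return step
```

On the real device, a `True` result would activate the wrist-cooling element; the actual product presumably combines multiple sensor channels rather than a single reading.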

The femtech category in general has attracted an influx of funding in recent years, as venture capitalists slowly catch up to the opportunities available in products and services catering to women’s health issues.

But it’s fair to say menopause remains an under-addressed segment within the category, although there are now signs that more attention is being paid to issues that affect many hundreds of millions of middle-aged (and some younger) women around the world.

The team working on Grace has built several prototypes to date, per founder Peter Astbury. He says some limited user testing has also been done. But they’ve yet to robustly prove the efficacy of the core tech — hence taking grant funding for more advanced testing. At this stage of development there’s also no timeline for when a product might be brought to market.

Astinno and Morgan IAT, its commercial partner on the project, have been awarded the Innovate UK money via a publicly funded UK SMART grants scheme (the pair are getting match funding via the scheme, with the public body putting up 70% and Astinno and Morgan IAT funding the other 30% of their respective costs).

Loughborough University — Astbury’s alma mater — is also involved as a research party, and is being funded for 100% of its grant costs.

“Several prototypes have been created so far, mainly by myself having received electronics and design training as part of my degree at Loughborough University,” says Astbury. “Shortly after leaving university I also briefly worked with an electronics company who helped to refine some of the components within the Grace product.

“Morgan IAT has the crucial technical role of developing a number of prototypes in conjunction with Astinno. This includes both hardware and software development, building many more advanced prototypes that are being tested, refined and then tested again.

“We’re working with three researchers from Loughborough University which brings together industry leading expertise in menopause psychology and physiology. Based at the National Centre for Sports and Exercise Medicine, the researchers are using their fantastic lab facilities to test Grace, meaning that everything we’re doing is being validated by professional research. Once this step is complete, we’ll have more of an idea regarding product release time-frames.”

Astbury founded the startup last summer — but had begun work on the concept for Grace several years before, during his final year at Loughborough, back in 2016.

“As a member of Loughborough’s business incubator, ‘The Studio’, I was awarded an enterprise grant which helped to fund the business. I have also been putting my User Experience design skills and expertise to good use, contracting for start-ups and larger healthcare companies on a part-time basis to ‘bootstrap’ development,” he adds.

The idea for the wearable came after Astbury conducted user research by talking to women about their menopausal symptoms and hearing about their coping strategies for hot flushes and the night sweats that can be induced.

“A woman was telling me about her symptoms and how she coped with them until now. She would wake up ten to fifteen times each night due to her night sweats. Each time, she would go to the bathroom and run her wrists under cold water which helped the flush to subside. Looking into this method in more depth, it became clear that cooling an area of skin can indeed be extremely effective and there are lots of women that use this technique,” he explains.

“During a hot flush, your brain mistakenly thinks that you are becoming too warm and causes your body to lose heat. This results in sweating, a reddening of the skin and shortness of breath. The skin, however, acts like your body’s thermometer, passing information to your brain. By applying cooling to the skin at the right time, we’re harnessing the body’s natural temperature regulation system. The brain receives signals that you are cool and, in turn, the body reacts in a way that is directly opposite to a hot flush.”

“The real key to Grace is accurately and reliably pre-empting hot flushes (the automated nature of the bracelet) so that cooling can be applied at the earliest stage possible,” he adds. “We’re doing that using a specific line-up of sensor technology and algorithms all working together but I’m afraid the details of that can’t be disclosed publicly yet.”

Astbury says he was keen to get grant funding at this stage of product development to avoid dilution of the business, given VCs would require their chunk of equity.

“One of the best things about Innovate UK for a science-based start-up like Astinno is that it doesn’t contribute to the dilution of your business,” he notes. “By the end of a successful grant project, a company becomes a much more attractive investment from the perspective of both investors and the start-up. I have had discussions with multiple angels/VC’s and will maintain those relationships, however a grant was the best option for us at this stage.”

Samsung Electronics announced today that its blood pressure monitoring app for Galaxy Watches has been approved by South Korean regulators. Called the Samsung Health Monitor, the app will be available for the Galaxy Watch Active2 during the third quarter, at least in South Korea, and added to upcoming Galaxy Watch devices.

TechCrunch has contacted Samsung for more information on when the app, which uses the Galaxy Watch Active2’s advanced sensor technology, will be available in other markets.

It was cleared by South Korea’s Ministry of Food and Drug Safety for use as an over-the-counter, cuffless blood pressure monitoring app. The app first has to be calibrated with a traditional blood pressure cuff, then it monitors blood pressure through pulse wave analysis. Users need to recalibrate the app at least once every four weeks.
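Samsung hasn’t published its algorithm, but the calibrate-then-track pattern described above can be sketched in outline. The snippet below is a hypothetical illustration only: the class name, the “feature” input (standing in for whatever pulse-wave-derived signal the watch actually extracts), and the population slope coefficient are all invented for the example; only the one-time cuff calibration and the four-week recalibration window mirror what’s described here.

```python
from datetime import datetime, timedelta

CALIBRATION_VALID_FOR = timedelta(weeks=4)  # recalibrate at least every four weeks

class CuffCalibratedEstimator:
    """Illustrative sketch of cuffless blood pressure estimation:
    fit a per-user offset from one cuff reading, then track changes
    via a pulse-derived feature (hypothetical here)."""

    POPULATION_SLOPE = 2.0  # made-up coefficient, mmHg per feature unit

    def __init__(self):
        self.offset = None
        self.calibrated_at = None

    def calibrate(self, cuff_systolic, feature):
        # Anchor the model so it reproduces the cuff reading exactly
        # for the feature value observed at calibration time.
        self.offset = cuff_systolic - self.POPULATION_SLOPE * feature
        self.calibrated_at = datetime.now()

    def estimate(self, feature):
        if self.offset is None:
            raise RuntimeError("calibrate with a cuff reading first")
        if datetime.now() - self.calibrated_at > CALIBRATION_VALID_FOR:
            raise RuntimeError("calibration expired; recalibrate with a cuff")
        return self.offset + self.POPULATION_SLOPE * feature
```

The key point the sketch captures is that the app doesn’t measure pressure directly: it anchors a model to a single ground-truth cuff reading, then extrapolates from pulse-wave changes, which is why the calibration goes stale and must be refreshed.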

According to a recent report by IDC, in the last quarter of 2019, Samsung wearables ranked third in terms of shipments, behind Apple and Xiaomi, with volume driven by its Galaxy Active watches. Samsung has sought to differentiate its smartwatches with a focus on health and fitness monitoring, including sleep trackers.


If you find voice assistants frustratingly dumb, you’re hardly alone. The much-hyped promise of AI-driven vocal convenience very quickly falls through the cracks of robotic pedantry.

A smart AI that has to come back again (and sometimes again) to ask for extra input to execute your request can seem especially dumb — when, for example, it doesn’t get that the most likely repair shop you’re asking about is not just any one of them but the one you’re parked outside of right now.

Researchers at the Human-Computer Interaction Institute at Carnegie Mellon University, working with Gierad Laput, a machine learning engineer at Apple, have devised a demo software add-on for voice assistants that lets smartphone users boost the savvy of an on-device AI by giving it a helping hand — or rather a helping head.

The prototype system makes simultaneous use of a smartphone’s front and rear cameras to locate the user’s head in physical space and, more specifically, within the immediate surroundings — which are parsed to identify objects in the vicinity using computer vision technology.

The user is then able to use their head as a pointer to direct their gaze at whatever they’re talking about — i.e. ‘that garage’ — wordlessly filling in contextual gaps in the AI’s understanding in a way the researchers contend is more natural.

So, instead of needing to talk like a robot in order to tap the utility of a voice AI, you can sound a bit more, well, human. Asking stuff like ‘Siri, when does that Starbucks close?’ Or — in a retail setting — ‘are there other color options for that sofa?’ Or asking for an instant price comparison between ‘this chair and that one’. Or for a lamp to be added to your wish-list.

In a home/office scenario, the system could also let the user remotely control a variety of devices within their field of vision — without needing to be hyper specific about it. Instead they could just look towards the smart TV or thermostat and speak the required volume/temperature adjustment.

The team has put together a demo video (below) showing the prototype — which they’ve called WorldGaze — in action. “We use the iPhone’s front-facing camera to track the head in 3D, including its direction vector. Because the geometry of the front and back cameras are known, we can raycast the head vector into the world as seen by the rear-facing camera,” they explain in the video.

“This allows the user to intuitively define an object or region of interest using the head gaze. Voice assistants can then use this contextual information to make enquiries that are more precise and natural.”
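The raycasting step the researchers describe can be sketched in outline. The snippet below is a speculative illustration, not WorldGaze’s actual code: it assumes the head’s gaze ray has already been transformed into the rear camera’s coordinate frame (on a phone, the fixed front-to-rear extrinsics make that a known, constant transform) and that the rear frame has been run through an off-the-shelf object detector producing labeled bounding boxes. The function name and the pinhole-intrinsics tuple are invented for the example.

```python
import numpy as np

def project_head_gaze(head_origin, head_dir, rear_intrinsics, detections):
    """Cast the head's gaze ray into the rear camera's image and
    return the label of the first detected object it lands on.

    head_origin, head_dir: 3D ray in the rear camera's frame.
    rear_intrinsics: (fx, fy, cx, cy) pinhole parameters.
    detections: list of (label, (x0, y0, x1, y1)) bounding boxes.
    """
    fx, fy, cx, cy = rear_intrinsics
    # Sample points along the ray from 0.5 m to 10 m out and project
    # each into the image plane with the standard pinhole model.
    for depth in np.linspace(0.5, 10.0, 50):
        p = head_origin + depth * head_dir
        if p[2] <= 0:          # point is behind the camera; skip it
            continue
        u = fx * p[0] / p[2] + cx
        v = fy * p[1] / p[2] + cy
        for label, (x0, y0, x1, y1) in detections:
            if x0 <= u <= x1 and y0 <= v <= y1:
                return label   # the gaze ray hit this object's box
    return None                # the user isn't looking at anything detected
```

A straight-ahead gaze, for instance, projects to the image center, so whichever detected object’s box covers the center of the rear frame would be returned as the thing ‘that’ refers to.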

In a research paper presenting the prototype they also suggest it could be used to “help to socialize mobile AR experiences, currently typified by people walking down the street looking down at their devices”.

Asked to expand on this, CMU researcher Chris Harrison told TechCrunch: “People are always walking and looking down at their phones, which isn’t very social. They aren’t engaging with other people, or even looking at the beautiful world around them. With something like WorldGaze, people can look out into the world, but still ask questions to their smartphone. If I’m walking down the street, I can inquire and listen about restaurant reviews or add things to my shopping list without having to look down at my phone. But the phone still has all the smarts. I don’t have to buy something extra or special.”

In the paper they note there is a long body of research related to tracking users’ gaze for interactive purposes — but a key aim of their work here was to develop “a functional, real-time prototype, constraining ourselves to hardware found on commodity smartphones”. (Although the rear camera’s field of view is one potential limitation they discuss, including suggesting a partial workaround for any hardware that falls short.)

“Although WorldGaze could be launched as a standalone application, we believe it is more likely for WorldGaze to be integrated as a background service that wakes upon a voice assistant trigger (e.g., “Hey Siri”),” they also write. “Although opening both cameras and performing computer vision processing is energy consumptive, the duty cycle would be so low as to not significantly impact battery life of today’s smartphones. It may even be that only a single frame is needed from both cameras, after which they can turn back off (WorldGaze startup time is 7 sec). Using bench equipment, we estimated power consumption at ~0.1 mWh per inquiry.”

Of course there’s still something a bit awkward about a human holding a screen up in front of their face and talking to it — but Harrison confirms the software could work just as easily hands-free on a pair of smart spectacles.

“Both are possible,” he told us. “We choose to focus on smartphones simply because everyone has one (and WorldGaze could literally be a software update), while almost no one has AR glasses (yet).  But the premise of using where you are looking to supercharge voice assistants applies to both.”

“Increasingly, AR glasses include sensors to track gaze location (e.g., Magic Leap, which uses it for focusing reasons), so in that case, one only needs outwards facing cameras,” he added.

Taking a further leap it’s possible to imagine such a system being combined with facial recognition technology — to allow a smart spec-wearer to quietly tip their head and ask ‘who’s that?’ — assuming the necessary facial data was legally available in the AI’s memory banks.

Features such as “add to contacts” or “when did we last meet” could then be unlocked, to augment a networking or socializing experience. Although, at this point, the privacy implications of unleashing such a system into the real world look rather more challenging than stitching together the engineering. (See, for example, Apple banning Clearview AI’s app for violating its rules.)

“There would have to be a level of security and permissions to go along with this, and it’s not something we are contemplating right now, but it’s an interesting (and potentially scary idea),” agrees Harrison when we ask about such a possibility.

The team was due to present the research at ACM CHI — but the conference was canceled due to the coronavirus.

Augmented reality headset maker Magic Leap has struggled with the laws of physics and failed to get to market. Now it’s seeking an acquirer, but talks with Facebook and medical goods giant Johnson & Johnson led nowhere, according to a new report from Bloomberg’s Ed Hammond.

After raising over $2 billion and being valued between $6 billion and $8 billion back when it still had momentum, Hammond writes that “Magic Leap could fetch more than $10 billion if it pursues a sale” according to his sources. That price seems ridiculous. It’s the kind of number a prideful company might strategically leak in hopes of drumming up acquisition interest, even at a lower price.

Startups have been getting their valuations chopped when they go public. The whole economy is hurting due to the coronavirus. Augmented reality seems less interesting than virtual reality with people avoiding public places. Getting people to strap used AR hardware to their faces for demos seems like a tough sell for the foreseeable future.

No one has proven a killer consumer use case for augmented reality eyewear that warrants an expensive and awkward-to-wear gadget. Our phones can already deliver plenty of AR’s value while letting you take selfies and do video chat that headsets can’t. My experiences with Magic Leap at Sundance Film Festival last year were laughably disappointing, with its clunky hardware, ghostly projections, and narrow field of view.

Apple and Facebook are throwing the enduring profits of iPhones and the News Feed into building a better consumer headset. Snapchat has built intermediary glasses since CEO Evan Spiegel thinks it will be a decade before AR headsets see mainstream adoption. AR rivals like Microsoft have better enterprise experience, connections, and distribution. Enterprise AR startup Daqri crashed and burned.

Magic Leap’s CEO said he wanted to sell 1 million of its $2,300 headsets in its first year, then projected it would sell 100,000 headsets, but only moved 6,000 in the first six months, according to a damning report from The Information’s Alex Heath. Alphabet CEO Sundar Pichai left Magic Leap’s board despite Google leading a $514 million funding round for the startup in 2014. Business Insider’s Steven Tweedie and Kevin Webb revealed CFO Scott Henry and SVP of creative strategy John Gaeta bailed in November. The company suffered dozens of layoffs. It lost a $500 million contract to Microsoft last year. The CEOs of Apple, Google, and Facebook visited Magic Leap headquarters in 2016 to explore an acquisition deal, but no offers emerged.

Is AR eyewear part of the future? Almost surely. And is this startup valuable? Certainly somewhat. But Magic Leap may prove to be too little too early for a company burning cash by the hundreds of millions in a market newly fixated on efficiency. A $10 billion price tag would require one of the world’s biggest corporations to believe Magic Leap has irreplicable talent and technology that will earn them a fortune in the somewhat distant future.

The fact that Facebook, which does not shy from tall acquisition prices, didn’t want to buy Magic Leap is telling. This isn’t a product with hundreds of millions of users or fast-ramping revenue. It’s a gamble on vision and timing that looks to be coming up snake eyes. It’s unclear when the startup would ever be able to deliver on its renderings of flying whales and living room dinosaurs in a form factor people actually want to wear.


One of Magic Leap’s early renderings of what it could supposedly do

With all their money and plenty of time before widespread demand for AR headsets materializes, potential acquirers could likely hire away the talent and make up the development time in cheaper ways than buying Magic Leap. If someone acquires them for too much, it feels like a write-off waiting to happen.

Fitness, wallpaper, and lost item-finding startups could have a big new competitor baked into everyone’s iPhones. Leaks of the code from iOS 14 that Apple is expected to reveal in June signal several new features and devices are on the way. Startups could be at risk due to Apple’s ability to integrate these additions at the iOS level, instantly gain an enormous install base, and offer them for free or cheap as long as they boost sales of its main money maker, the iPhone.

It’s unclear which of these fresh finds will actually get an official unveiling in June versus further down the line. But here’s a breakdown of what the iOS 14 code obtained by 9To5Mac’s Chance Miller shows, and which startups could be impacted by Apple barging into their businesses:

Fitness – Codename: Seymour

Apple appears to be preparing a workout guide app for iOS, WatchOS, and Apple TV that would let users download instructional video clips for doing different exercises. The app could potentially be called Fit or Fitness, according to MacRumors‘ Juli Clover, and offer help with stretching, core training, strength training, running, cycling, rowing, outdoor walking, dance, and yoga. The Apple Watch appears to help track your progress through the workout routines.

Icons for Apple’s fitness feature from the iOS 14 code

The iOS Health app is already a popular way to track steps and other fitness goals. By using Health to personalize or promote a new Fitness feature, Apple has an easy path to a huge user base. Many people are afraid of weight and strength training because there’s a lot to learn about proper form to avoid injury or embarrassment. Visual guides with videos shot from multiple angles could make sure you’re doing those pushups or bicep curls correctly.

Apple’s entrance into fitness could endanger startups like Future, which offer customized workout routines with video clips demonstrating how to do each exercise. The $11.5 million-funded Future actually sends you an Apple Watch with its $150 per month service to track your progress while using visuals, sounds, and vibrations to tell you when to switch exercises without having to look at your phone. By removing Future’s human personal trainers that text to nag you if you don’t work out, Apple could offer a simplified version of this startup’s app for free.

Apple Fitness could be even more trouble for less premium apps like Sweat and Sworkit that provide basic visual guidance for workouts, or Aaptiv, which is restricted to just audio cues. Hardware startups like Peloton, which offers off-bike Beyond The Ride workouts with live or on-demand classes, and Tempo, with its giant 3D-sensing in-home screen for weight lifting, could also find casual customers picked off by a free or cheap alternative from Apple.

There’s no code indicating a payment mechanism, so Apple Fitness could be free. But it’s also easy to imagine Apple layering on premium features like remote personal training assistance from human experts or a wider array of exercises for a fee, tying into its increasing focus on services revenue.

Wallpapers – Access For Third-Parties

The iPhone’s current wallpaper selector

In iOS 14, it appears that Apple will offer new categorizations for wallpapers beyond the existing Dynamic (slowly shifting), Still, and Live (move when touched) options. Apple has always offered only a few native wallpapers, plus the option to pull one from your camera roll. But the iOS 14 code suggests Apple may open this up to third-party providers.

A wallpaper ‘store’ could be both a blessing and a curse for entrepreneurs in the space. It could endanger sites and apps like Vellum, Unsplash, Clarity, WLPPR, and Walli that aggregate wallpapers for browsing, purchase, or download. Instead, Apple could make itself the ultimate aggregator by being built directly into the wallpaper settings. But for creators of beautiful wallpaper images, iOS 14 could potentially offer a new distribution method where their collections could be available straight from where users install their phone backgrounds.

The big question will be whether Apple merely works with a few providers to add in wallpaper packs for free, does financially backed deals to bring in providers, or creates a full-blown marketplace for wallpapers where creators can sell their imagery the way developers sell apps. By turning this formerly free feature into a marketplace, Apple could also start earning a cut of sales to add to its services revenue.

AirTags – Find Your Stuff

Apple appears to be getting closer to launching its long-awaited AirTags, based on iOS 14 code snippets. These small tracking tags could be attached to your wallet, keys, gadgets, or other important or easily lost items, and then located using the iOS Find My app. AirTags may be powered by removable coin-shaped batteries, according to MacRumors.

Native integration with iOS could make AirTags super easy to set up. They could also benefit from the ubiquity of Apple devices, as the company could let the crowd help find your stuff by allowing AirTags to piggyback on the connectivity of any of its phones, tablets, or laptops to send you the missing item’s coordinates.

Most obviously, AirTags could become a powerful competitor to the vertical’s long-standing frontrunner Tile. The $104 million-funded startup sells $20 to $35 tracking tags that locate devices from 150 to 400 feet away. It also sells a $30 per year subscription for free battery replacements and 30 day location history. Other players in the space include Chipolo, Orbit, and MYNT.

But as we saw with the launch of AirPods, Apple’s design expertise and native iOS integrations can allow its products to leapfrog what’s in the market. If AirTags get proprietary access to the iPhone’s Bluetooth and other connectivity hardware, and if they’re quicker to set up, Apple fans might jump from startups to these new devices. Apple could also develop a similar premium subscription for battery or full AirTag replacements, as well as bonus tracking features.

Augmented Reality Scanning – Codename: Gobi

iOS 14 includes code for a new augmented reality feature that lets users scan places or potentially items in the real world to pull up helpful information. The code indicates Apple is testing the feature, codenamed Gobi, at Apple Stores and Starbucks to let users see product, pricing, and comparison info, according to 9To5Mac’s Benjamin Mayo. Gobi can recognize QR-style codes for specific locations like a certain shop, triggering a companion augmented reality experience.

It appears that an SDK would allow partners to build their own AR offerings and generate the QR codes that initiate them. Eventually, these capabilities could be extended from Apple’s mobile devices to the AR headset it’s working on so you’d instantly get a heads-up display of information when you entered the right place.

Apple moving to power lighter-weight AR experiences rather than just offering the AR Kit infrastructure for developers to build full-fledged apps could create competition for a range of startups and other tech giants. The whole point of augmented reality is that it’s convenient to explore hidden experiences in the real world, which is defeated if users have to know to download and then wait to install a different app for every place or product. Creating a central AR app for simpler experiences that load instantly could speed up adoption.

Snapchat’s Scan AR platform

Startups like Blippar have been working on AR scanning for years in hopes of making consumer packaged goods or retail locations come alive. But again, the need to download a separate app and remember to use it has kept these experiences out of the mainstream. Snapchat’s Scan platform can similarly trigger AR effects based on specific items from a more popular app. And teasers of Facebook and Google’s eventual augmented reality hardware and software hinge on adding utility to everyday life.

If Apple can build this technology into everyone’s iPhone cameras, it could surmount one of AR’s biggest distribution challenges. That might help it build out a developer ecosystem and train customers to seek out AR so they’re all ready when its AR glasses finally arrive.