
It takes massive amounts of data to train AI models. But sometimes, that data simply isn’t available from real-world sources, so data scientists use synthetic data to make up for that. In machine vision applications, that means creating different environments and objects to train robots or self-driving cars, for example. But while there are quite a few tools out there to create virtual environments, there aren’t a lot of tools for creating virtual objects.

At its re:Mars conference, Amazon today announced synthetics in SageMaker Ground Truth, a new feature for creating a virtually unlimited number of images of a given object in different positions and under different lighting conditions, as well as with different proportions and other variations.

With WorldForge, the company already offers a tool to create synthetic scenes. “Instead of generating whole worlds for the robot to move around, this is specific to items or individual components,” AWS VP of Engineering Bill Vass told me. He noted that the company itself needed a tool like this because even with the millions of packages that Amazon ships, it still didn’t have enough images to train a robot.

“What Ground Truth Synthetics does is you start with the 3D model in a number of different formats that you can pull it in and it’ll synthetically generate photorealistic images that match the resolution of the sensors you have,” he explained. And while some customers today purposely distress or break the physical parts of a machine, for example, to take pictures of them to train their models — which can quickly become quite expensive — they can now distress the virtual parts instead and do that millions of times if needed.
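
To make the idea concrete, here's a minimal Python sketch of the kind of domain-randomization loop such a service automates: sampling a new pose, lighting setup, scale and "distress" level for every rendered image. The parameter names and ranges are our own illustration, not Amazon's API.

```python
import random

def sample_scene_params() -> dict:
    """Sample one randomized scene configuration for a synthetic image.

    A hypothetical illustration of domain randomization; SageMaker
    Ground Truth offers this as a managed service, not via this API.
    """
    return {
        "rotation_deg": [random.uniform(0, 360) for _ in range(3)],  # object pose
        "light_intensity": random.uniform(0.2, 2.0),                 # lighting conditions
        "light_angle_deg": random.uniform(0, 90),
        "scale": random.uniform(0.8, 1.2),                           # proportion changes
        "damage_level": random.uniform(0.0, 1.0),                    # virtually "distressed" parts
    }

# A virtually unlimited dataset is just a long list of such configurations,
# each fed to a renderer to produce a labeled photorealistic image.
dataset = [sample_scene_params() for _ in range(10_000)]
print(dataset[0])
```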

He cited the example of a customer who makes chicken nuggets. That customer used the tool to simulate lots of malformed chicken nuggets to train their model. 

Vass noted that Amazon is also partnering with 3D artists to help companies that may not have access to that kind of in-house talent get started with this service, which uses the Unreal Engine by default, though it also supports Unity and the open-source Open 3D Engine. Using those engines, users can also start simulating the physics of how those objects would behave in the real world.

Meta wants to make it clear it’s not giving up on high-end VR experiences yet. So, in a rare move, the company is spilling the beans on several VR headset prototypes at once. The goal, according to CEO Mark Zuckerberg, is to eventually craft something that could pass the “visual Turing Test,” or the point where virtual reality is practically indistinguishable from the real world. That’s the Holy Grail for VR enthusiasts, but for Meta’s critics, it’s another troubling sign that the company wants to own reality (even if Zuckerberg says he doesn’t want to completely own the metaverse).

As explained by Zuckerberg and Michael Abrash, Chief Scientist of Meta’s Reality Labs, creating the perfect VR headset involves perfecting four basic concepts. First, they need to reach a resolution high enough for 20/20 VR vision (with no need for prescription glasses). Additionally, headsets need variable focal depth and eye tracking so you can easily focus on nearby and faraway objects, as well as fixes for the optical distortions inherent in current lenses. Finally, Meta needs to bring HDR, or high dynamic range, into headsets to deliver more realistic brightness, shadows and color depth. (More so than resolution, HDR is a major reason why modern TVs and computer monitors look better than LCDs from a decade ago.)
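
For a rough sense of what 20/20 VR vision actually demands, here is some back-of-the-envelope Python. We're assuming two commonly cited figures, not numbers from Meta's briefing: 20/20 vision resolves about one arcminute, i.e. roughly 60 pixels per degree (PPD), and the Quest 2 has about 1,832 horizontal pixels per eye across a field of view of roughly 90 degrees.

```python
# 20/20 vision resolves ~1 arcminute, i.e. ~60 pixels per degree (PPD).
RETINAL_PPD = 60

def pixels_per_degree(horizontal_pixels: float, fov_degrees: float) -> float:
    """Average angular pixel density across the horizontal field of view."""
    return horizontal_pixels / fov_degrees

# Assumed Quest 2 figures: ~1832 px per eye over a ~90-degree FOV.
print(f"Quest 2: ~{pixels_per_degree(1832, 90):.0f} PPD vs a {RETINAL_PPD} PPD target")

# Halving the FOV at the same pixel count doubles the PPD, which is the
# trade-off one of the prototypes described below reportedly makes.
print(f"Half the FOV: ~{pixels_per_degree(1832, 45):.0f} PPD")
```

That works out to roughly 20 PPD for the Quest 2, which is why text that is trivial to read on a monitor looks blurry in today's headsets.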

Meta Reality Labs VR headset prototypes. Image Credits: Meta

And of course, the company needs to wrap all of these concepts into a headset that’s light and easy to wear. In 2020, Facebook Reality Labs showed off a pair of concept VR glasses using holographic lenses, which looked like oversized sunglasses. Building on that original concept, the company revealed Holocake 2 today (above), its thinnest VR headset yet. It looks more traditional than the original pair, but notably, Zuckerberg says it’s a fully functional prototype that can play any VR game while tethered to a PC.

“Displays that match the full capacity of human vision are going to unlock some really important things,” Zuckerberg said in a media briefing. “The first is a realistic sense of presence, and that’s the feeling of being with someone or in some place as if you’re physically there. And given our focus on helping people connect, you can see why this is such a big deal.” He described testing photorealistic avatars in a mixed reality environment, where his VR companion looked like it was standing right beside him. While “presence” may seem like an esoteric term these days, it’s easier to understand once headsets can realistically connect you to remote friends, family and colleagues.

Meta’s upcoming Cambria headset appears to be only a small step toward achieving true VR presence; the brief glimpses we’ve seen of its technology make it seem like a modest upgrade from the Oculus Quest 2. While admitting the perfect headset is far off, Zuckerberg showed off prototypes that demonstrated how much progress Meta’s Reality Labs has made so far.

Meta Reality Labs VR headset prototypes. Image Credits: Meta

There’s “Butterscotch” (above), which can display near retinal resolution, allowing you to read the bottom line of an eye test in VR. (Unfortunately, the Reality Labs engineers also had to cut the Quest 2’s field of view in half to achieve that.) The Starburst HDR prototype looks even wilder: It’s a bundle of wires, fans and other electronics that can produce up to 20,000 nits of brightness. That’s a huge leap from the Quest 2’s 100 nits, and it’s even leagues ahead of super-bright Mini-LED displays we’re seeing today. (My eyes are watering at the thought of putting that much light close to my face.) Starburst is too large and unwieldy to strap onto your head, so researchers have to peer into it like a pair of binoculars.

Editor’s note: This article originally appeared on Engadget.

The fifth generation of mobile networks, or 5G, is poised to revolutionize Voice over Internet Protocol (VoIP) for businesses. 5G improves call speeds, has greater capacity, and reduces latency, making VoIP calls clearer and more reliable than ever before. This will be a huge benefit for businesses that heavily rely on VoIP for communications. If you’re looking to stay ahead of the curve, make sure to invest in 5G-compatible VoIP hardware and software. Read on to learn how 5G can improve the quality of your VoIP calls.

Explore VR and AR

With 5G network speeds, virtual and augmented reality can become more common for many small- and medium-sized businesses (SMBs). 5G easily surpasses 4G’s roughly 1 Gbps (gigabit per second) theoretical speed limit, which is currently holding back businesses’ adoption of virtual reality (VR) and augmented reality (AR) applications.

VR and AR involve significantly more data because of the visuals they must render as users move, and this puts an enormous strain on mobile networks. 5G is set to ensure a better user experience by facilitating smoother connections and preventing the network delays that can affect your bottom line.

Improved video conferencing

One of the major hindrances to smooth web and video conferencing is slow network data transfer. Fortunately, innovations like Web Real-Time Communication (WebRTC) and 5G networks can enhance VoIP for businesses by providing open and stable streaming as well as sufficient transfer speeds. These will allow businesses to view higher-quality videos, even those at 4K and 8K resolution.

Beyond improved streaming quality, 5G networks will also be able to support video calls with a larger number of participants, which is timely, considering the current shift to remote working.

Utilize mobile VoIP

VoIP calls heavily rely on sufficient download and upload speeds. For example, mobile VoIP users may experience unstable calls and poor clarity when their 4G networks are limited to around 12 Mbps download and 2 Mbps upload speeds. These limitations can also lead to packet loss. Packet loss happens when one or more “packets” of data traveling across a computer network fail to reach their destination, typically because of network congestion. Packet loss reduces audio/video quality and can even cause calls to be dropped, leading to a poor VoIP experience. Thanks to 5G’s greater speed and capacity, packet loss becomes far less likely.
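
To make the mechanics concrete, here is a minimal Python sketch of how a receiver can estimate packet loss from the sequence numbers of the packets that actually arrive. Real VoIP stacks do this with RTP sequence numbers (including wraparound handling), which this toy version ignores.

```python
def packet_loss_rate(received_seq_nums: list[int]) -> float:
    """Fraction of expected packets that never arrived.

    Toy model: assumes sequence numbers increase monotonically with
    no wraparound, unlike real RTP streams.
    """
    if not received_seq_nums:
        return 1.0
    expected = max(received_seq_nums) - min(received_seq_nums) + 1
    return 1 - len(set(received_seq_nums)) / expected

# Packets 3 and 6 were dropped in transit: 2 of 7 expected packets lost.
print(f"{packet_loss_rate([1, 2, 4, 5, 7]):.0%}")  # -> 29%
```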

Moreover, 4G network providers set a fixed amount of bandwidth for every direction in which they transmit a signal. Unlike 4G, 5G bandwidth can be adjusted on the fly, meaning 5G network providers can allocate bandwidth to mitigate congestion. In practical terms, businesses could reach their customers even when those customers are in crowded places that normally max out 4G network capacity, like football stadiums or airports.
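
A toy Python model of that difference: instead of every flow getting a fixed slice, the cell's capacity is shared out in proportion to demand. The capacity figure and flow names below are made up for illustration.

```python
def allocate(capacity_mbps: float, demands: dict[str, float]) -> dict[str, float]:
    """Split available capacity in proportion to demand when congested.

    A toy model of on-the-fly bandwidth allocation, not a real 5G scheduler.
    """
    total_demand = sum(demands.values())
    if total_demand <= capacity_mbps:
        return dict(demands)  # no congestion: everyone gets what they asked for
    return {flow: capacity_mbps * d / total_demand for flow, d in demands.items()}

# A congested cell: 100 Mbps shared across a stadium's worth of traffic.
print(allocate(100, {"voip_calls": 30, "video": 120, "browsing": 50}))
# -> {'voip_calls': 15.0, 'video': 60.0, 'browsing': 25.0}
```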

When your business decides to adopt the up-and-coming 5G network, you can expect to see significant VoIP improvements. If you’re looking to set up a VoIP system for your business, call or email us today.

Meta’s recently crowned president of global affairs, Nick Clegg — who, in a former life, was literally the deputy prime minister of the U.K. — has been earning his keep in California by penning an approximately 8,000-word manifesto to promo “the metaverse”: aka, the sci-fi-inspired vapourware the company we all know as Facebook fixed on for a major rebranding last fall.

Back then, founder and CEO Mark Zuckerberg, pronounced that the new entity (Meta) would be a “metaverse-first” company “from now on”. So it’s kinda funny that the key question Clegg says he’s addressing in his essay is “what is the metaverse” — and, basically, why should anyone care? But trying to explain such core logic is apparently keeping Meta’s metamates plenty busy.

The Medium post Clegg published yesterday warns readers it will require 32 minutes of their lives to take in. So few people may have cared to read it. As a Brit, I can assure you, no one should feel obliged to submit to 32 minutes of Nick Clegg — especially not bloviating at his employer’s behest. So TechCrunch took that bullet for the team and read (ok, skim-read) the screed so you don’t have to.

What follows is our bullet-pointed digest of Clegg’s metaverse manifesto. But first we invite you to chew over this WordCloud (below), which condenses his ~7,900-word essay down to 50 — most boldly featuring the word “metaverse” orbiting “internet”, thereby grounding the essay firmly in our existing digital ecosystem.

Glad we could jettison a few thousand words to arrive at that first base. But, wait, there’s more!

Image credits: Natasha Lomas/TechCrunch

Fun found word pairs that leap out of the CleggCloud include “companies rules” (not democratic rules then Clegg?); “people technologies” (possibly just an oxymoron; but we’re open to the possibility that it’s a euphemistic catch-all for ill-fated startups like HBO’s Silicon Valley‘s (satirical) ‘Human Heater’); “around potential” (not actual potential then?); “meta physical” (we lol’d); and — squint or you’ll miss it! — “privacy possible” (or possibly “possible privacy”).

The extremely faint ink for that latter pairing adds a fitting layer of additional uncertainty that life in the Zuckerberg-Clegg metaverse will be anything other than truly horrific for privacy. (Keen eyed readers may feel obligated to point out that the CleggCloud also contains “private experience” as another exceptionally faint pairing. Albeit, having inhaled the full Clegg screed, we can confirm he’s envisaging “private experience” in exceptional, siloed, close-friend spaces — not that the entire metaverse will be a paradise for human privacy. Lol!)

Before we move on to the digest, we feel it’s also worth noting a couple of words that aren’t used in Clegg’s essay — and so can only be ‘invisibly inked’ on our wordcloud (much like a tracking pixel) — deserving a mention by merit of their omission: Namely, “tracking” and “profiling”; aka, how advertising giant Meta makes its money now. Because, we must assume, tracking and profiling is how Meta plans to make its money in the mixed reality future Clegg is trying to flog.

His essay doesn’t spare any words on how Meta plans to monetize its cash-burning ‘pivot’ or reconfigure the current “we sell ads” business model in the theoretical, mixed reality future scenario he’s sketching, where the digital commerce playground is comprised of a mesh of interconnecting services owned and operated by scores of different/competing companies.

But perhaps — and we’re speculating wildly here — Meta is envisaging being able to supplement selling surveillance-targeted ads by collecting display-rents from the cottage industry of “creators” Clegg & co. hope will spring up to serve these spaces by making digital items to sell users, such as virtual threads for their avatars, or virtual fitting rooms to buy real threads… (‘That’s a nice ‘Bored Ape T-Shirt’ you’re planning to sell — great job! — but if you want metamates to be able to see it in full glorious color you’ll want to pay our advanced display fees’, type thing. Just a thought!)

Now onwards to our digest of Clegg’s screed — which we’ve filleted into a series of bulleted assertions/suggestions being made by the Meta president (adding our commentary alongside in bold-italics). Enjoy how much time we’ve saved you.

  • There won’t be ‘a’ or ‘the metaverse’, in the sense of a single experience/owned entity; there will be “metaverse spaces” across different devices, which may — or may not — interoperate nicely [so it’s a giant rebranding exercise of existing techs like VR, AR, social gaming etc?] 
  • But the grand vision is “a universal, virtual layer that everyone can experience on top of today’s physical world” [aka total intermediation of human interaction and the complete destruction of privacy and intimacy in service of creating limitless, real-time commercial opportunities and enhanced data capture]
  • Metaverse spaces will over index on ephemerality, embodiment and immersion and be more likely to centre speech-based communication vs current social apps, which suggests users may be more candid and/or forget they’re not actually alone with their buddies [so Meta and any other mega corporates providing “metaverse spaces” can listen in to less guarded digital chatter and analyze avatar and/or actual body language to derive richer emotional profiles for selling stuff] 
  • The metaverse could be useful for education and training [despite the essay’s headline claim to answer “why it matters”, Clegg doesn’t actually make much of a case for the point of the metaverse or why anyone would actually want to fritter their time away in a heavily surveilled virtual shopping mall — but he includes some vague suggestions it’ll be useful for things like education or healthcare training. At one point he enthuses that the metaverse will “make learning more active” — which implies he was hiding under a rock during pandemic school shutdowns. He also suggests metaverse tech will remove limits on learning related to geographical location — to which one might respond have you heard of books? Or the Internet? etc]
  • The metaverse will create new digital divides — given those who can afford the best hardware will get the most immersive experience [not a very equally distributed future then is it Clegg?] 
  • It’s anyone’s guess how much money the metaverse might generate — or how many jobs it could create! [🤷]
  • But! Staggeringly vast amounts of labor will be required to sustain these interconnected metaverse spaces [i.e. to maintain any kind of suspension of disbelief that it’s worth the time sink and to prevent them from being flooded with toxicity]
  • Developers especially there will be so much work for you!!! [developers, developers, developers!]
  • Unlike Facebook, there won’t be one set of rules for the metaverse — it’s going to be a patchwork of ToS [aka, it’ll be a confusing mess. Plus governments/states may also be doing some of the rule-making via regulation]
  • A lack of interoperability/playing nice between any commercial entities that build “metaverse experiences” could fatally fragment the seamless connectivity Meta is so keen on [seems inevitable tbh; thereby threatening the entire Meta rebranding project. Immersive walled gardens anyone?]
  • Meta’s metaverse might let you create temporary, siloed private spaces where you can talk with friends [but only in the same siloed way that FB Messenger offers E2EE via “Secret Conversations” — i.e. surveillance remains Meta’s overarching rule]
  • Bad metaverse experiences will probably be even more horrible than 2D-based cyberbullying etc [yep, virtual sexual assault is already a thing]
  • There are big challenges and uncertainties ahead for Meta [no shit]
  • It’s going to take at least 10-15 years for anything resembling Meta’s idea of connected metaverse/s to be built [Clegg actually specified: “if not longer”; imagine entire decades of Zuckerberg-Clegg!]
  • Meta hopes to work with all sorts of stakeholders as it develops metaverse technologies [aka, it needs massive buy-in if there’s to be a snowflake’s chance in hell of pulling off this rebranding pivot and not just sinking billions into a metaverse money-hole]
  • Meta names a few “priority areas” it says are guiding its metaverse development — topped by “economic opportunity” [just think of all those developer/creator jobs again! Just don’t forget who’s making the mega profits right now… All four listed priorities offer more PR soundbite than substance. For example, on “privacy” — another of Meta’s stated priorities — Clegg writes: “how we can build meaningful transparency and control into our products”. Which is a truly rhetorical ask from the former politician, since Facebook does not give users meaningful control over their privacy now — so we must assume Meta is planning a future of more of the same old abusive manipulations and dark patterns so it can extract as much of people’s data as it can get away with… Ditto “safety & integrity” and “equity & inclusion” under the current FB playbook.] 
  • “The metaverse is coming, one way or another” [Clegg’s concluding remark comes across as more of a threat than bold futuregazing. Either way, it certainly augurs Meta burning A LOT more money on this circus]

This year at TC Sessions: Mobility 2022, we’ll be chatting with Holoride co-founder and CEO Nils Wollny. The company is set to start shipping its in-auto VR experience in production Audi cars and SUVs this year, and Wollny will be able to provide us with more details about that pending launch.

Wollny will also offer more insight into Holoride’s dive into the world of crypto and its development of a utility token for its virtual experiences. The company has put a lot of thought into its business model and has been very explicit about its intent not to pursue an ad-supported revenue plan. Wollny will talk about how Holoride’s crypto plans work relative to that core commitment and the business overall.

We’ll also talk about the changing environment for VR in general, including the advent of “The Metaverse,” as well as rules that could pave the way for people in self-driving cars to legally consume entertainment content on the road while in motion, even with no driver at the wheel.

Virtual reality has come a long way even in just the few short years since Audi spun out Holoride and it made its debut as an independent company at CES in 2019. Now, on the verge of its production vehicle debut, Wollny will give us a glimpse into what kind of future the startup is about to deliver.

TC Sessions: Mobility 2022 breaks through the hype and goes beyond the headlines to discover how merging technology and transportation will affect a broad swath of industries, cities and the people who work and live in them. Register today before prices increase May 15!

Synthetically generated versions of real people that can be programmed to say anything sound like a scenario from the latest episode of “Black Mirror.”

But in fact, production-grade video-based characters based on real people — which can talk about any product or subject at all, in a hyperlifelike manner — are arguably going to be part of the next wave in areas like e-commerce and remote learning. Further, a Hollywood celebrity could simply license out their avatar to explain products, at a scale that would make it impossible to physically film. But perhaps more realistically, “digital twins” like this make much more convincing videos than invented characters, because of their humanlike qualities.

The market for this technology is expanding. Key players in the space include SoulMachines (which has raised $135 million) and Synthesia (raised $66.6 million).

Back in 2020, we reported how Hour One, a New York- and Tel Aviv-based startup that creates AI-driven synthetic characters based on real humans, had closed a $5 million seed funding round.

It’s now raised a $20 million Series A funding round led by Insight Partners. Also participating in the round was Galaxy Interactive, Remagine Ventures, Kindred Ventures, Semble Ventures, Cerca Partners, Digital-Horizon and Eynat Guez.

The startup plans to expand its Reals platform, a self-service platform that allows businesses “to create human-led video automatically, from just text, in a matter of minutes,” the company said in a statement.

This, says the firm, converts people into virtual human characters for commercial and professional use cases. The human is first captured on video, then Hour One’s AI generates a virtual twin. This could be a virtual receptionist, salesperson, HR representative or language teacher, for example.

To some extent, Hour One’s view is correct: the shift to remote work has made video and more immersive media, such as educational content, much more important. Demand for this kind of video persona is therefore likely to expand.

Hour One CEO and Founder Oren Aharon said in a statement: “Very soon, any person will be able to have a virtual twin for professional use that can deliver content on their behalf, speak in any language, and scale their productivity in ways previously unimaginable.”

“The power and accuracy of generative AI continues to improve at an extremely rapid pace, and Hour One is at the vanguard,” added Lonne Jaffe, managing director at Insight Partners. “You just type in some text, and behind the scenes the incredibly scalable Hour One infrastructure creates a fluid and realistic video of an avatar talking along with matching voice and graphics. The team’s grand vision is to be able to embed this extraordinary capability within any software product or allow it to be invoked in real-time via API.”

Berlitz, the language and culture training giant, now uses Hour One to generate video, featuring virtual instructors across thousands of its videos. Hour One has also partnered with NBCUniversal, DreamWorks and Cameo, the latter of which allows celebrities to record paid videos for fans.

The appearance of the likes of SoulMachines, Synthesia and Hour One raises questions about how this technology might be abused. Watermarking videos as “artificial” might be one way to prevent this, but we are still swimming in uncharted waters here. Hour One says it has an ethical policy code for how its technology is used.

We are definitely going to see some “interesting” scenarios appear around this technology, which is proliferating much faster than the startups themselves.

Platforms like Figma have changed the game when it comes to how creatives and other stakeholders in the production and product team conceive and iterate around two-dimensional designs. Now, a company called Gravity Sketch has taken that concept into 3D, leveraging tools like virtual reality headsets to let designers and others dive into and better visualize a product’s design as it’s being made; and the London-based startup is today announcing $33 million in funding to take its own business to the next dimension.

The Series A is coming as Gravity Sketch passes 100,000 users, including product design teams at firms like Adidas, Reebok, Volkswagen and Ford.

The funding will be used to continue expanding the functionality of its platform, with special attention going to LandingPad, a collaboration feature built to let “non-designer” stakeholders see and provide feedback on designs earlier in a product’s development cycle.

The round is being led by Accel, with GV (formerly known as Google Ventures) and previous backers Kindred Capital, Point Nine and Forward Partners (all from its seed round in 2020) also participating, along with unnamed individual investors. The company has now raised over $40 million.

Gravity Sketch was co-founded by Oluwaseyi Sosanya (CEO), Daniela Paredes Fuentes (CXO) and Daniel Thomas (CTO). Sosanya and Paredes Fuentes met when they were both doing a joint design/engineering degree across the Royal College of Art and Imperial College in London, and they went on to work together in industrial design at Jaguar Land Rover. Across those and other experiences, the two found they were encountering the same problems in the process of doing their jobs.

Much design in its earliest stages is often still sketched by hand, Sosanya noted, “but machines for tooling run on digital files.” That is just one of the steps where something is lost or complicated in translation: “From sketches to digital files is a very arduous process,” he said, involving perhaps seven or eight versions of the same drawing. Then technical drawings need to be produced, and then modeling for production, all complicated by the fact that the object is three-dimensional.

“There were so many inefficiencies, that the end result was never that close to the original intent,” he said. It wasn’t just design teams being involved, either, but marketing and manufacturing and finance and executive teams as well.

One issue is that we think in 3D, but sketching is a skill that needs to be learned, and most digital drawing tools are designed around translating ideas onto a 2D surface. “People sketch to bring ideas into the world, but the problem is that people need to learn to sketch, and that leads to a lot of miscommunication,” Paredes Fuentes added.

Even sketches that a designer makes may not be true to the original idea. “Communications and conversations were happening too late in the process,” she said. The idea, she noted, is to bring in collaboration earlier so that input and potential changes can be snagged earlier, too, making the whole design and manufacturing process less expensive overall.

Gravity Sketch’s solution is a platform that taps into innovations in computer vision and augmented and virtual reality to let teams of people collaborate and work together in 3D from day one.

The approach that Gravity Sketch takes is to be “agnostic” in its approach, Sosanya said, meaning that it can be used from first sketch through to manufacturing; or files can be imported from it into whatever tooling software a company happens to be using; or designs might not go into a physical realm at any point at all: more recently, designers have been building NFT objects on Gravity Sketch.

One thing that it’s not doing is providing stress tests or engineering calculations, instead making the platform as limitless as possible as an engine for creativity. Bringing that too soon into the process would be “forcing boundaries,” Sosanya said. “We want to be as unrestricted as a piece of paper, but in the third dimension. We feed in engineering tools but that comes after you’ve proposed a solution.”

Although there are plenty of design software makers in the market today, relatively little has been built to address what Paredes Fuentes described as “spatial thinkers,” and so although companies like Adobe have made acquisitions like Allegorithmic to bring in 3D expertise, Adobe has yet to bring out a 3D design engine.

“It’s highly difficult to build a geometry engine from the ground up,” Sosanya said. “A lot haven’t dared to step in because it’s a very complex space because of the 3D aspect. The tech enables a lot of things but taking the approach we have is what has brought us success.” That approach is not just to make it possible to “step into” the design process from the start through a 3D virtual reality environment (it provides apps for iOS, Steam and Oculus Quest and Rift), but also to use computers and smartphones to collaborate together as well.

While much of its focus is on bringing tools to the commercial world, Gravity Sketch has also found traction in education, with around 170 schools and universities using the platform to complement their own programs. It said that revenues grew four-fold in the last year, although it doesn’t disclose actual revenue numbers. Some 70% of its customers are in the U.S.

The investment will be used to continue developing Gravity Sketch’s LandingPad collaboration features to better support the non-designer stakeholders essential to the design process — a reflection of Gravity Sketch’s belief that greater diversity in the design industry and more voices in the development process will result in better performing products on the market. Companies – including the likes of Miro and Figma – have already disrupted the 2D space, enabling teams to co-create and collaborate quickly and inclusively in online workspaces, and now Gravity Sketch’s inclusive features are set to shake up the 3D environment. The funds will also be used to enhance the platform’s creative tools and scale the company’s sales, customer success and onboarding teams.

“In today’s climate, online collaboration tools have emerged as a necessity for businesses that want to stay agile and connect their teams in the most interactive, authentic and productive way possible,” said Harry Nelis, a partner at Accel, in a statement. “Design is no different, and we’ve been blown away by Gravity Sketch’s innovative, forward-looking suite of collaboration design tools that are already revolutionising workflows across numerous industries. Moreover, we expect that 3D design – coupled with the advent of virtual reality – will only grow in importance as brands race to build the emerging metaverse. The early organic traction and tier one brands that Oluwaseyi, Daniela and the Gravity Sketch team have already secured as customers are extremely impressive. We’re excited to partner with them and help them realise their dream of a more efficient, sustainable and democratic design world.”

Major League Baseball may have started in the 19th century and come of age in the 20th, but it is definitely no stranger to technology, whether it’s the cloud for storing and analyzing troves of data or figuring out how to customize and enhance fans’ experience.

To do all of that, MLB uses a range of technology from customized video search and Statcast for advanced statistics to streaming and mobile apps and games for its fans. The league is already welcoming NFTs and looking at AR and VR as it tries to take advantage of whatever tech is out there that makes sense for baseball.

I spoke to Vasanth Williams, the league’s head of engineering and chief product officer, to get a better sense of the technology being adopted and how it’s used.

Williams said baseball has its fingers in so many tech pies that there’s no such thing as a typical day for him.

“It’s hard to have a typical day, because the breadth of our portfolio products is quite large. But overall, the biggest priority for me is to drive fan engagement — leveraging all the new technologies and the data we have to help not just understand the game itself better or the data that we generate, but also create new and interesting experiences for fans in ways they can better connect to baseball, and also the community at large,” he said.

Williams was hired by MLB after stints at Microsoft, Facebook and Amazon, so he understands Big Tech and said he saw a chance to work in a place that is constantly trying to innovate and take advantage of available technology.

“MLB has a long history of leveraging data and technology, and being an early adopter of a lot of the technologies, which I love doing. I’m happy to join the journey to continue that and push the envelope in sports technology as a whole,” he said.

MLB FilmRoom lets you search for video footage from across baseball. Image Credits: MLB

MLB briefly worked with AWS to build its cloud stack, but it has now gone all-in with Google Cloud. The league is now building a platform for creating applications, which individual teams can also take advantage of.

After reports that women were already being groped and sexually harassed in Meta’s new VR spaces, Horizon Worlds and Venues, the company formerly known as Facebook last month rolled out a new “Personal Boundary” feature that created a bubble of space with a radius of two virtual feet around each avatar. This prevented avatars from getting within roughly four feet of one another. Today, Meta is making this feature customizable, allowing users to turn the setting off or control when it’s enabled.

Instead of making the boundary default to on for all Horizon Worlds experiences, Meta said today it will allow users to choose whether or not they want the setting enabled for all interactions. Now, VR users will be able to turn their 4-foot Personal Boundary off, as was the standard prior to the feature’s launch. There is still a small personal boundary to prevent unwanted interactions, the company says — but this was not enough in the past to prevent bad actors from simulating rape in Meta’s virtual worlds, we should note.

Users will also be able to turn the Personal Boundary on for non-Friends only, which would enable the extra safety feature when you’re with people you don’t know, but leave it off when you’re virtually hanging out with people on your friends list. You can also choose to keep the Personal Boundary enabled for all experiences, as before.

However, Meta says it’s adjusting the default setting to keep the Personal Boundary on for non-Friends only, which means it’s dialing back the safety feature a bit. Given that Horizon Worlds is a new social network, people may be friending other users they don’t know in real life after meeting them in the virtual space. That means a user’s friends list may not be quite the same as a list of people the user explicitly trusts. So some caution should still be advised here.

Image Credits: Meta

Meta claims the changes were made based on community feedback after February’s rollout of the Personal Boundary feature. The company believes the new options will make it easier for people to high-five, fist-bump and take selfies with other avatars in Horizon Worlds.

In addition, Meta says the Personal Boundary will default to the more restrictive setting when two people meet for the first time. For example, if one person’s Personal Boundary is off but the other person’s is set to On for Everyone, then the platform will establish a 4-foot space between both people. And it says the Personal Boundary will now default to on at roughly 4 feet for everyone participating in its live events VR experience Horizon Venues.
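
The resolution rule Meta describes can be sketched in a few lines of Python. The setting names below are our own stand-ins, and the radii come from the article: roughly two virtual feet per avatar, so about four feet between avatars when the boundary applies.

```python
# Hypothetical setting names; radii (in virtual feet) per the article.
BOUNDARY_RADIUS_FT = {"off": 0.0, "non_friends_only": 2.0, "on_for_everyone": 2.0}

def min_distance_ft(setting_a: str, setting_b: str, are_friends: bool) -> float:
    """Minimum avatar separation: the more restrictive setting wins."""
    def radius(setting: str) -> float:
        if setting == "non_friends_only" and are_friends:
            return 0.0  # boundary waived between friends
        return BOUNDARY_RADIUS_FT[setting]
    # The stricter user's bubble applies to both avatars.
    return 2 * max(radius(setting_a), radius(setting_b))

# One user has the boundary off, the other has it On for Everyone:
print(min_distance_ft("off", "on_for_everyone", are_friends=False))  # -> 4.0
```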

In its announcement about the changes, Meta acknowledged that developing for VR represents “what are perhaps some of the hardest challenges we’ve tackled in a generation of computing now that we’re no longer limited by fixed viewpoints and traditional flatscreen devices.”

But this statement seems to throw the blame for its earlier failures to protect women in its VR space solely on the fact that building for VR worlds is something new and, therefore, some trial and error will be involved. Had Meta sought the input of more women engineers or gamers to begin with, it’s hard to imagine this topic wouldn’t have come up. After all, sexual assault in virtual spaces has happened before, repeatedly — including in other virtual reality games, in VR precursors like Second Life, and even in a children’s virtual game on Roblox. It’s unbelievable that the company would not have considered built-in protections when designing a new VR environment. It also shows that Facebook’s tendency to design for growth and scale first and user safety second is carrying over to its new projects, like Horizon Worlds.

The company says it will continue to iterate and make improvements as it learns more about how Personal Boundary impacts the VR experience.
