Google's Poly API: Access 3D Objects for VR/AR Apps
December 13th, 2017
Ever since Pokemon Go launched, VR/AR technology has remained one of the most hyped areas of the market. And it is no longer just about games: developers are also employing VR/AR technology to enhance the user experience of everyday apps.
According to the [Digi Capital Augmented/Virtual Reality Report 2017](https://www.digi-capital.com/reports/), the VR/AR market will hit $150 billion in revenue by 2020, with AR accounting for about $120 billion and VR for about $25 billion. Moreover, mobile VR/AR will be the fundamental driver of this revenue. This has skyrocketed the demand for AR/VR-based mobile apps, which in turn has increased the demand for the 3D models and scenes required to build realistic AR/VR environments. To keep up, Google has been steadily releasing a wide range of 3D object creation tools.
Just a month ago, Google launched Poly, a platform for browsing, downloading, and uploading 3D objects. Now it has followed up with the Poly API, which enables developers to easily integrate 3D assets into their games and apps. Although the API is brand new, the world’s renowned [mobile app developers](https://appinventiv.com/mobile-application-development) are already using it to browse and download 3D objects on demand for their VR/AR app projects.
<Image src="/uploads/Image.png" alt="Image" width="559" height="232" />
*Interested in learning more about Poly?* Let’s explore together.
[Poly](https://poly.google.com/) is an online warehouse of 3D objects and scenes created using Blocks, Tilt Brush, or other such 3D programs. The platform allows developers to access the objects for free and remix them under the ‘CC BY’ license, i.e., as long as credit is given to the creator.
The assets are available under a wide range of categories and can be filtered as per your requirements. In other words, one can employ the Poly API to:
* Filter assets on the basis of:
  * Category (Technology, People, Creatures, etc.)
  * Asset type (Tilt Brush, Blocks, etc.)
  * Complexity (Low, Medium, High)
* Get the user’s own models
* Get objects the user has liked
* Get a particular asset by ID
* Download assets in a given format (GLTF1, GLTF2, OBJ)
* Download material files and textures for 3D models and scenes
* Fetch asset metadata (author, title, creation time, description, license, etc.)
* Get thumbnails for assets
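As a sketch of how these filters map onto the REST endpoint, the following Python snippet builds a list-assets request URL. The endpoint path and parameter names (`keywords`, `category`, `format`, `maxComplexity`, `pageSize`) follow the public Poly API documentation; the API key is a placeholder you would obtain from the Google Cloud console.

```python
from urllib.parse import urlencode

POLY_ASSETS_ENDPOINT = "https://poly.googleapis.com/v1/assets"

def build_assets_query(api_key, keywords=None, category=None,
                       asset_format=None, max_complexity=None, page_size=20):
    """Build a Poly API list-assets URL with the common filters applied."""
    params = {"key": api_key, "pageSize": page_size}
    if keywords:
        params["keywords"] = keywords             # free-text search terms
    if category:
        params["category"] = category             # e.g. "technology", "animals"
    if asset_format:
        params["format"] = asset_format           # e.g. "OBJ", "GLTF2", "TILT"
    if max_complexity:
        params["maxComplexity"] = max_complexity  # "SIMPLE", "MEDIUM", or "COMPLEX"
    return POLY_ASSETS_ENDPOINT + "?" + urlencode(params)

url = build_assets_query("YOUR_API_KEY", keywords="duck",
                         asset_format="OBJ", max_complexity="MEDIUM")
print(url)
```

Issuing a GET request to this URL returns a JSON page of matching assets, each with its metadata and a list of downloadable formats.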
The 3D models and scenes can be employed in Android, Web, and iOS [augmented reality apps](http://www.gummicube.com/2017/06/top-5-mobile-app-development-trends/), as well as used with the Unity and Unreal game engines. Developers can incorporate the available 3D assets into their projects either at edit time or at runtime. Edit time refers to manually downloading models from Poly, collecting them in a particular location, and importing them into the app’s project later; this is the right option when you want to quickly add a fixed set of assets. Runtime, by contrast, means downloading 3D objects while the application is running, which lets developers explore the Poly library and integrate 3D assets on the fly.
*Curious how to use the Poly API in an iOS, Android, or Web app?*
**For Android App**
For integrating the Poly API with Android mobile apps, the Google team has provided [Android sample code](https://github.com/googlevr/poly-sample-android), consisting of a basic sample with no external dependencies and another sample showing how to integrate the Poly API with ARCore.
The samples demonstrate how to make asynchronous HTTP connections to the Poly API, download 3D asset files, convert OBJ and MTL files into OpenGL-compatible VBOs and IBOs, dynamically integrate objects downloaded from Poly with ARCore, and so on.
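The download step starts from the asset's JSON representation. As a minimal, language-neutral sketch (shown here in Python), the helper below picks the OBJ format out of an asset and collects the file URLs to fetch. The `formats`/`formatType`/`root`/`resources` field names follow the Poly API's asset schema; the sample asset dict itself is made up for illustration.

```python
def obj_download_urls(asset):
    """Return (obj_url, resource_urls) for an asset dict from the Poly API,
    or None if the asset offers no OBJ format. The root file is the .obj
    itself; resources are the companion .mtl and texture files."""
    for fmt in asset.get("formats", []):
        if fmt.get("formatType") == "OBJ":
            root_url = fmt["root"]["url"]
            resource_urls = [r["url"] for r in fmt.get("resources", [])]
            return root_url, resource_urls
    return None

# A made-up asset dict shaped like a Poly API response:
asset = {
    "name": "assets/EXAMPLE_ID",
    "displayName": "Example Duck",
    "formats": [
        {"formatType": "OBJ",
         "root": {"url": "https://example.com/duck.obj"},
         "resources": [{"url": "https://example.com/duck.mtl"}]},
    ],
}
obj_url, resource_urls = obj_download_urls(asset)
print(obj_url)
```

Once the URLs are known, each file is fetched over HTTP and the OBJ/MTL data is parsed into vertex and index buffers for rendering, as the Android samples do.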
**For iOS App**
For iOS app developers, the Google team has published [two samples](https://github.com/googlevr/poly-sample-ios) (one using SceneKit and one using ARKit) showing how to build an iOS application that imports assets from Poly. The samples include all the logic required to open an HTTP connection, generate an API request, parse the results, create 3D models from the data, and place them in the scene.
**For Web App**
The Google team has offered a complete [WebGL sample using Three.js](https://github.com/googlevr/poly-sample-web), which shows how to fetch and display a specific model and how to execute searches. There is also a sample demonstrating how to fetch and display Tilt Brush sketches.
Besides these, Google has also released Poly Toolkit for Unreal and Poly Toolkit for Unity, which wrap the API so that developers can easily download and import models from the Poly platform, and much more. In a nutshell, the Poly API is a great help from Google to all the game and mobile app developers who wish to deliver high-quality AR/VR experiences to their users and, in turn, increase engagement and drive revenue.