
Content API Implementation Guide

A technical deep dive into building automated product content pipelines with 3D models, templates, and the Glossi Content API.

1. The 3D Model as Source of Truth

Every piece of product content — whether it's a hero PDP image, a lifestyle composite, a 360-degree spin, or a social media crop — originates from the same place: the 3D model.

That model is the source of truth for how a product looks. Its geometry, materials, colors, and proportions are exact. Unlike photography, where every shoot is a new interpretation, the 3D model produces content that is dimensionally correct, color-accurate, and repeatable every time.

This has a consequence that matters at scale: when the source of truth updates, every piece of content derived from it can update too.

A product changes colorways mid-season. The material on a sole gets revised. A handle is redesigned. Update the 3D model, send it back through the pipeline, and the entire content package regenerates. Every angle, every channel format, every video. No reshoots. No re-retouching. No production coordinator chasing down which assets are stale.

This is how product data already works. Change a price in the ERP, and it propagates everywhere. Change a description in the PIM, and every channel reflects it. Product visual content has never had that property. With the Content API, it does.

2. From Physical Product to Complete Content Package

The most direct version of what the API enables:

Scan a product. Send it through a pre-approved pipeline. Receive a complete, certified content package.

That's the full sequence. A physical product gets captured as a 3D model — through photogrammetry, a structured light scanner, or a phone-based capture tool. The model enters Glossi through the API. A pre-configured template applies the brand's approved lighting, camera angles, animation paths, and output specifications. Minutes later, the complete asset set is delivered, formatted for every channel the brand sells through:

  • Hero stills at every specified angle
  • Detail and close-up shots for product pages
  • Animated 360-degree spin videos
  • Cinematic camera path animations for marketing and social
  • Looping motion graphics for retail media and paid placements
  • Channel-specific crops and aspect ratios

Stills and video from the same template, the same render pass, the same source model.

No photographer. No studio booking. No retoucher. No production coordinator reformatting files for different marketplaces. The pipeline from physical object to published content collapses from weeks to minutes.
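As a sketch of how that sequence might look from a developer's seat, here is a minimal Python example using the requests library. The base URL, endpoint paths, field names, and template ID are illustrative assumptions, not the published API surface.

```python
import time
import requests

API = "https://api.glossi.example/v1"                 # hypothetical base URL
HEADERS = {"Authorization": "Bearer GLOSSI_API_KEY"}  # standard token auth (placeholder key)

# 1. Upload the scanned model (a GLB from photogrammetry or a phone-based capture tool).
with open("sneaker_scan.glb", "rb") as f:
    model = requests.post(f"{API}/models", headers=HEADERS,
                          files={"file": ("sneaker_scan.glb", f)}).json()

# 2. Request a render with a pre-approved template (ID built in the Glossi studio; illustrative here).
job = requests.post(f"{API}/renders", headers=HEADERS, json={
    "model_id": model["id"],
    "template_id": "tpl_footwear_pdp",
}).json()

# 3. Poll until the content package is ready, then download every asset in it.
while True:
    status = requests.get(f"{API}/renders/{job['id']}", headers=HEADERS).json()
    if status["state"] == "complete":
        break
    time.sleep(10)

for asset in status["assets"]:  # stills, 360 spins, video, motion graphics
    data = requests.get(asset["url"], headers=HEADERS).content
    with open(asset["filename"], "wb") as out:
        out.write(data)
```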

And because the 3D model persists as the source of truth, that content package isn't a one-time output. It's a living asset. New channel requirement from a retail partner? Run the model through an additional template. Seasonal campaign needs a different background treatment? Apply a new style configuration. Product revision? Update the model, and every downstream asset regenerates.

The content follows the product, not the other way around.

3. How It Works

The API is built around three concepts: models, templates, and renders.

  • Models: 3D product assets from Blender, CLO 3D, Autodesk, Rhino, SolidWorks, or scans. Auto-normalized on import.
  • Templates: complete production recipes — cameras, lighting, animation, output specs. Built once, applied to any model.
  • Renders: the output — hero stills, 360 spins, video, motion graphics. Certified and channel-ready.

Models are 3D product assets — the source of truth. GLB, USDZ, FBX, OBJ, or any format exported from tools like Blender, CLO 3D, Autodesk Maya, Rhino, SolidWorks, or captured through 3D scanning. Upload them through the API or connect a watched folder. Glossi automatically detects product bounds, centers and scales the model, and prepares it for rendering. Models persist in the system — when a model updates, any template can be re-applied to regenerate the full content package.

Templates encode everything about how a product should be shot: camera angles, lighting rigs, animation paths, background settings, output formats. A template is a complete production recipe built and approved by your creative team. Build it once in the Glossi studio, then apply it to any model through the API without touching the studio again.

Renders are the output. Submit a model and a template ID. Glossi produces the full asset set — hero stills, detail shots, animated 360-degree spins, cinematic video, and looping motion graphics. Images and video render from the same scene in the same pass. Every output is framed consistently, lit consistently, and formatted to specification.
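Because models persist, the regeneration loop is just another render request. The short Python sketch below illustrates it; the model ID, template IDs, endpoints, and the PUT-to-update convention are assumptions for illustration, not a documented contract.

```python
import requests

API = "https://api.glossi.example/v1"                 # hypothetical base URL
HEADERS = {"Authorization": "Bearer GLOSSI_API_KEY"}

MODEL_ID = "mdl_trailrunner"  # persisted model (illustrative ID)

# Push a revised export of the same product; the model ID stays stable,
# so everything downstream keeps pointing at the source of truth.
with open("trailrunner_rev_b.glb", "rb") as f:
    requests.put(f"{API}/models/{MODEL_ID}", headers=HEADERS,
                 files={"file": ("trailrunner_rev_b.glb", f)})

# Re-apply the previously approved templates to regenerate the content package.
for template_id in ("tpl_pdp_hero", "tpl_social_orbit"):
    job = requests.post(f"{API}/renders", headers=HEADERS, json={
        "model_id": MODEL_ID,
        "template_id": template_id,
    }).json()
    print(f"{template_id}: render job {job['id']} queued")
```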

  1. Upload: model arrives via API, DAM, or watched folder.
  2. Normalize: auto-detect bounds, center, scale.
  3. Template: cameras, lighting, and animation apply.
  4. Render: on Unreal Engine 5 infrastructure.
  5. Deliver: certified assets to DAM, PIM, or CDN.

No one opens a tool. No one adjusts a camera. No one reformats for different channels. The entire pipeline runs in minutes. When the source model updates, steps 2 through 5 repeat automatically.

4. Template-Driven Production

The template system is central to how the API creates consistency at scale.

Each template captures the full production setup: every camera position, every light, every animation keyframe, every output specification. When a brand's creative team builds a template for a product category, they're encoding their visual standard into a reusable, machine-executable recipe.

A footwear template might include: front view, left profile, back view, right profile, 45-degree hero shot, sole detail, an 11-second 360-degree spin video, and a cinematic orbit animation for social. All with studio-grade lighting calibrated to the brand's photography standards. All rendered at the resolutions and aspect ratios their channels require. Stills export as PNG or JPEG. Videos export as MP4. Both produced from the same scene, the same lighting, the same camera rig.

That template applies identically to every shoe in the catalog. Model 1 or model 500, the framing, lighting, and output quality are the same. This kind of visual consistency across hundreds of SKUs is difficult to achieve even in a physical studio. With the API, it's automatic.

Templates can also be layered. Run the same model through a PDP template, then a lifestyle template, then a social media template. One model, multiple content sets, all from a single API call specifying which templates to apply.
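A hedged sketch of that single call, assuming the render endpoint accepts a list of template IDs; the field names and IDs are illustrative.

```python
import requests

API = "https://api.glossi.example/v1"                 # hypothetical base URL
HEADERS = {"Authorization": "Bearer GLOSSI_API_KEY"}

# One model, several content sets: the request names every template to apply.
# A "templates" list field is an assumption about the request shape.
job = requests.post(f"{API}/renders", headers=HEADERS, json={
    "model_id": "mdl_trailrunner",
    "templates": [
        "tpl_footwear_pdp",      # hero stills, details, 360 spin
        "tpl_lifestyle_studio",  # lifestyle background treatment
        "tpl_social_orbit",      # cinematic orbit for social
    ],
}).json()
print(f"Render batch {job['id']} queued")
```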

5. Automatic Model Handling

One of the harder problems in automated 3D content production is that models arrive in different states. A CLO 3D export uses different units than a Blender export. An Autodesk Maya scene places the model at world origin; a SolidWorks STEP file is offset by 200 units and rotated 90 degrees. A 3D scan from a phone app arrives at a completely different scale than a CAD file.

The API handles this automatically. On import, Glossi detects the product bounds of each model, recenters it, and normalizes scale so that template-defined cameras and lighting work correctly regardless of how the source model was authored. This headless normalization pipeline means upstream teams don't need to change their export process. Models arrive as they are. Glossi adapts.

For organizations with consistent part naming or material tagging in their 3D models, the API also supports rule-based material assignment. If a part is tagged “sole_rubber,” the API can automatically apply the correct material. This is optional and progressive: teams can start with geometry-only automation and add material automation as their 3D pipelines mature.
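As a sketch of what rule-based material assignment could look like from the client side, assuming rules are submitted alongside the model upload: the payload shape, endpoint, and material names below are illustrative assumptions.

```python
import json
import requests

API = "https://api.glossi.example/v1"                 # hypothetical base URL
HEADERS = {"Authorization": "Bearer GLOSSI_API_KEY"}

# Map part tags used in the source 3D file to approved material definitions.
# Both the rule format and the upload field are assumptions for illustration.
material_rules = {
    "sole_rubber": "mat_rubber_matte_black",
    "upper_mesh":  "mat_engineered_mesh",
    "laces_nylon": "mat_nylon_weave",
}

with open("trailrunner_rev_b.glb", "rb") as f:
    model = requests.post(
        f"{API}/models",
        headers=HEADERS,
        files={"file": ("trailrunner_rev_b.glb", f)},
        data={"material_rules": json.dumps(material_rules)},
    ).json()
print(f"Model {model['id']} uploaded with {len(material_rules)} material rules")
```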

6. Integration, Not Isolation

The API is designed to fit into existing infrastructure, not replace it. Glossi connects to wherever your source of truth lives and delivers content wherever it needs to go.


Asset management platforms. VNTANA, the 3D asset management platform, has already built a working integration. When a brand approves a 3D asset in VNTANA, a webhook fires, the model routes to Glossi through an n8n workflow, renders are produced, and the finished imagery uploads directly back into VNTANA's render carousel. The brand user never leaves their asset management environment. Content appears automatically.

The integration supports metadata-driven routing. A field on the asset in VNTANA — a template ID, a tag, a simple flag — determines which Glossi template applies. Different product categories route to different templates. The brand's creative decisions are encoded once and executed everywhere.
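The sketch below mirrors that routing logic in Python for readability; in the actual VNTANA integration it runs as n8n nodes, and the webhook payload fields, endpoints, and template IDs shown here are assumptions.

```python
import requests

API = "https://api.glossi.example/v1"                 # hypothetical Glossi base URL
HEADERS = {"Authorization": "Bearer GLOSSI_API_KEY"}

# Illustrative routing table: a metadata field on the approved asset
# (here, product category) decides which pre-approved template runs.
TEMPLATE_BY_CATEGORY = {
    "footwear":    "tpl_footwear_pdp",
    "handbags":    "tpl_accessories_pdp",
    "electronics": "tpl_hardware_studio",
}

def route_approved_asset(webhook_payload: dict) -> dict:
    """Handle an 'asset approved' webhook and start the matching render.

    The payload fields (metadata.category, model_url) are assumptions about
    what the upstream asset management system sends.
    """
    category = webhook_payload["metadata"]["category"]
    template_id = TEMPLATE_BY_CATEGORY[category]

    # Register the approved model by source URL (assumed upload-by-reference).
    model = requests.post(f"{API}/models", headers=HEADERS, json={
        "source_url": webhook_payload["model_url"],
    }).json()

    # Trigger the render; finished imagery routes back to the DAM downstream.
    return requests.post(f"{API}/renders", headers=HEADERS, json={
        "model_id": model["id"],
        "template_id": template_id,
    }).json()
```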

PIM and DAM systems. Salsify, Akeneo, Syndigo, and similar platforms manage product data and content. The API connects content production to product lifecycle events. New SKU created in the PIM? Glossi produces the standard image set. Product model updated? Glossi regenerates every associated asset. Seasonal refresh triggered? Glossi re-renders affected products with updated templates.

This is where the source of truth concept becomes operationally powerful. The PIM is the source of truth for product data. The 3D model is the source of truth for product appearance. Glossi connects the two so that content stays synchronized with both.

File-based workflows. For teams that work with folders and files, the API supports watched folder triggers. Drop a GLB into a designated Google Drive folder. Renders appear in an output folder minutes later. No configuration beyond the initial setup.

3D scanning pipelines. For organizations capturing physical products as 3D models, the API closes the loop from scan to content. A product is scanned on the warehouse floor or in a design studio. The resulting model enters Glossi's pipeline. Minutes later, a complete, channel-ready content package is available for merchandising. The gap between physical product and published content disappears.

Custom integrations. The API is REST-based with standard authentication. Any system that can make an HTTP request can trigger Glossi production. Webhook callbacks notify when renders complete, or workflows can poll for status.
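For the webhook path, a minimal callback receiver might look like the Flask sketch below. The event fields (state, render_id, assets) are assumptions about what a render-complete notification would carry.

```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/glossi/callback")
def render_complete():
    # Assumed payload: {"render_id": ..., "state": "complete", "assets": [...]}
    event = request.get_json()
    if event.get("state") == "complete":
        for asset in event.get("assets", []):
            # Hand off to the DAM / CDN uploader of your choice.
            print(f"render {event['render_id']}: {asset['filename']} ready at {asset['url']}")
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```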

7. Built for Workflow Automation and AI

The Content API is designed to be composable. Each step in the pipeline — model upload, normalization, template application, rendering, asset retrieval — is independently callable. This means Glossi slots into the middle of a larger automated workflow, not just at the beginning or end.

Example: n8n node-based workflow
Watch Folder (trigger) → Upload Model (API call) → Glossi Render (template + UE5) → DAM / CMS / CDN

Visual workflow builders. The VNTANA integration runs entirely in n8n, a node-based workflow automation platform. Each API call is a node. The full pipeline — from watched folder trigger to rendered asset delivery — is a visual graph that a non-developer can build, modify, and monitor. The same pattern works in Make, Zapier, or any workflow tool that supports HTTP requests. No code required to wire Glossi into an existing automation stack.

AI agents and programmatic systems. The API follows standard REST conventions with structured JSON inputs and outputs. Any system capable of making an HTTP request can trigger content production, including AI agents with function-calling capabilities. An agent orchestrating a product launch can request content as one step in a larger sequence: pull product data from the PIM, generate copy, trigger Glossi for imagery, push everything to the CMS. Content production becomes one callable action in an agent's toolkit, not a separate manual process that blocks the workflow.
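As an illustration, an agent with function-calling could be handed a tool that wraps the render endpoint. The sketch below uses an OpenAI-style tool definition expressed as a Python dict; the tool name and parameters are hypothetical, not a published integration.

```python
# Hypothetical function-calling tool an agent could use to request product content
# as one step in a launch sequence (pull PIM data, generate copy, render, publish).
glossi_render_tool = {
    "type": "function",
    "function": {
        "name": "create_product_renders",
        "description": (
            "Generate a channel-ready content package (stills, 360 spin, video) "
            "by applying a pre-approved Glossi template to a product's 3D model."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "model_id": {"type": "string", "description": "Glossi model ID for the product"},
                "template_id": {"type": "string", "description": "Pre-approved template to apply"},
            },
            "required": ["model_id", "template_id"],
        },
    },
}
```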

Composable by design. Glossi's endpoints can be called individually or chained together. Upload a model without rendering it. Render an existing model with a new template. Check status on a batch of jobs. Retrieve specific assets from a completed render. This granularity means builders can construct exactly the workflow they need rather than working around a monolithic process.
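A compact sketch of that granularity, with each call usable on its own; every endpoint path and parameter here is an illustrative assumption.

```python
import requests

API = "https://api.glossi.example/v1"                 # hypothetical base URL
HEADERS = {"Authorization": "Bearer GLOSSI_API_KEY"}

# Upload a model without rendering it yet.
with open("handbag.glb", "rb") as f:
    model = requests.post(f"{API}/models", headers=HEADERS,
                          files={"file": ("handbag.glb", f)}).json()

# Later: render that existing model with a newly approved template.
job = requests.post(f"{API}/renders", headers=HEADERS, json={
    "model_id": model["id"], "template_id": "tpl_holiday_refresh"}).json()

# Check status on a batch of jobs.
batch = requests.get(f"{API}/renders", headers=HEADERS,
                     params={"ids": job["id"]}).json()

# Retrieve one specific asset from a completed render.
spin = requests.get(f"{API}/renders/{job['id']}/assets/360_spin", headers=HEADERS)
with open("handbag_360.mp4", "wb") as out:
    out.write(spin.content)
```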

8. What This Changes

Product content production has historically required a human in the loop for every asset. A photographer, a 3D artist, a retoucher, a production coordinator. That model works at low volume. It breaks at catalog scale.

The API changes the equation in three specific ways.

  • Minutes, not weeks: a full content set — stills, video, animation — for a new SKU takes minutes once templates are configured.
  • Automatic consistency: every product in a category is shot with identical lighting, framing, and output specs. Consistency is a system property.
  • Living content: when the source model updates, content regenerates. When a new channel is added, existing models run through a new template.

9. What Becomes Possible

The Content API is already being used across several workflows. Three patterns are emerging from early adopters.

Catalog-scale production. A brand with hundreds of SKUs per season configures templates once per product category. New models flow through the API as they're finalized in design. By the time a product is ready for merchandising, its content already exists.

Partner and marketplace distribution. Brands producing content for multiple retail partners, each with different image specifications, run the same model through channel-specific templates. One source model produces Amazon-compliant, Shopify-compliant, and retailer-specific assets in a single batch.

Scan-to-content pipelines. Organizations capturing physical products as 3D models now have a direct, automated path from the scan to a complete content package. The 3D model becomes the bridge between the physical product and every piece of visual content that represents it. Scan once. Produce everything.

Get API access

Talk to our team about integrating the Content API into your production workflow.

Contact sales