You may have noticed that we talk about content tagging and machine learning a lot. That’s because it’s important.

It’s important on a basic level because without tagging you would never be able to find any digital file, and it’s important on a business level because tagging content intelligently plays an integral role in ensuring your marketing content is future-proof and future-ready.

Any digital asset management system worth its salt will use a relatively smart tagging system to ensure uploaded files can be found again by the user. But for an enterprise brand’s rapidly compounding pool of content, created by multiple creative partners, smart tagging just isn’t going to cut it.

To be high-functioning and truly valuable to the business, the system should use machine learning to tag, not just the image or video file, but the individual components within it. It should recognise what’s actually important to your business, not just deliver generic tags such as plane, car or hamburger. And it should encompass usage, production and copyright. This makes the content ‘intelligent’. And that’s where things get really exciting. That’s why Collaboro isn’t a DAM, it’s an Intelligent Content Engine (ICE).

So, how does machine-learned tagging work? Allow us to explain.

Using a combination of machine learning and human smarts, the system analyses and understands objects, people, text and scenes within imagery and video, to add multiple layers of searchable metadata.

These searchable layers can be broken down into nine components.

Metadata Layer 1: File Information

This is the basic information auto-generated by operating systems. Things like creation date, file type, file size, file name, and a few other parameters. Basic stuff, but handy nonetheless.
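To make this concrete, here is an illustrative sketch (not Collaboro’s actual implementation) of the kind of file-level metadata an operating system exposes, gathered in Python:

```python
from datetime import datetime, timezone
from pathlib import Path


def file_info(path: str) -> dict:
    """Collect basic OS-level metadata for a file."""
    p = Path(path)
    stat = p.stat()
    return {
        "file_name": p.name,
        "file_type": p.suffix.lstrip(".").lower(),
        "file_size_bytes": stat.st_size,
        # st_ctime is creation time on Windows; on most Unix systems it is
        # the metadata-change time, so treat it accordingly.
        "created": datetime.fromtimestamp(stat.st_ctime, tz=timezone.utc).isoformat(),
        "modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
    }
```

A record like this is the first, cheapest layer of searchable metadata — it comes for free with every upload.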

Metadata Layer 2: Embedded Capture Information

When cameras (and phones) capture images and video, they also store relevant metadata: camera type, frame size, frame rate, technical capture information and, crucially, GPS location data. The Collaboro ICE extracts that location information and makes it searchable.
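EXIF GPS data is typically stored as a degrees/minutes/seconds triple plus a hemisphere reference. A minimal sketch of the conversion to a searchable decimal coordinate (illustrative only, not Collaboro’s code):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert an EXIF-style degrees/minutes/seconds triple to a decimal coordinate.

    `ref` is the hemisphere reference: 'N'/'E' give positive values,
    'S'/'W' give negative values.
    """
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal
```

Once the coordinate is decimal, it can be indexed and matched against place names or map regions like any other field.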

Metadata Layer 3: Production Information

Drawing on call sheets, job sheets, project management platforms, agency production reports and any other campaign information, ICE makes the details of a production searchable: producer, director, ad agency, product, campaign, job number and more.

Metadata Layer 4: Copyright and Usage Information

A set of information detailing the copyright holder, licensing agreement details, usage contracts for talent, music, voiceover and photography, and contact information – all the things that allow brands to utilise their assets easily and with clarity.

Metadata Layer 5: Bespoke Customer-Centric Keywords

ICE uses a bespoke thesaurus of keywords that reflect a brand’s specific use case. They are specific and detailed – often product or infrastructure terms that are critically important and often used internally. For some of our clients, ‘hamburger’ is far too generic to be useful.
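Conceptually, a bespoke thesaurus maps generic machine tags onto a brand’s own vocabulary. A hypothetical sketch (the thesaurus entries and function names here are invented for illustration):

```python
# Hypothetical brand thesaurus: generic AI tags mapped to brand-specific keywords.
BRAND_THESAURUS = {
    "hamburger": ["Classic Beef Stack", "signature burger range"],
    "plane": ["A330 fleet", "long-haul cabin"],
}


def enrich_tags(generic_tags):
    """Expand a list of generic tags with any brand-specific equivalents."""
    enriched = list(generic_tags)
    for tag in generic_tags:
        enriched.extend(BRAND_THESAURUS.get(tag, []))
    return enriched
```

The point of the design: the generic tag is kept (so broad searches still work), while the brand’s internal terms are layered on top, so a search for the internal product name surfaces the same asset.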

Metadata Layer 6: Basic Object Recognition

Broad keywords populated as searchable metadata by an automated AI framework: basic visual elements and objects such as plane, car, dog, man, day, night, water and park.

Metadata Layer 7: Facial Recognition

ICE dovetails with Microsoft’s AI-driven facial recognition toolkit, and can teach the toolkit new faces that are relevant to each brand. It also supports emotion recognition.

Metadata Layer 8: Audio Transcription

ICE delivers searchable transcripts of all spoken words in any video. This is especially powerful for interviews and longer form content.

Metadata Layer 9: Onscreen Character Recognition

ICE recognises and tags words that appear as visuals on-screen, from supers to background signage, turning visual words into searchable tags.

In a nutshell…

Machine learning and bespoke tagging make each asset highly searchable and deeply collated. That means content is user-friendly for anyone involved with a brand’s marketing function, now and well into the future, and easily surfaceable in an ever-larger pool of content.

But it also opens the door for video and imagery created right now to be repurposed at an exciting time in the not-too-distant future.

A time when marketers will be able to extract specific components from their assets to target their audience on an individual level, in a fully automated process. And that’s how intelligent content now becomes the marketing content of the future.