Whatever you need,
we catch it

Our AI software is based on cutting-edge research in video and audio understanding. We continually develop new features and improve our algorithms, and we cover all major AI recognition services.

You can leverage our collection of features in many ways. Combined, the features give real-time value and insight into your media exposure. Individually, each feature helps you dive deep into a specific need, delivering precise and valuable data to help you make the right decisions.

Logo recognition

The MediaCatch logo recognition feature is not generic, but specifically trained to find any logo across all media channels. It delivers extreme accuracy and detailed data on the size, location, duration and prominence of the logo. It applies to a range of use cases, from advanced sponsorship analytics to tracking and quantifying your company's exposure in selected channels.
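As a rough illustration of the kind of detection record such a feature could return (the field names below are invented for this sketch and are not MediaCatch's actual schema):

```python
from dataclasses import dataclass

# Hypothetical illustration of a logo detection record; field names are
# invented for this sketch and are not MediaCatch's actual schema.
@dataclass
class LogoDetection:
    brand: str            # which logo was found
    channel: str          # media channel the frame came from
    timestamp_s: float    # position in the broadcast, in seconds
    duration_s: float     # how long the logo stays on screen
    bbox: tuple[float, float, float, float]  # x, y, width, height (relative to frame)
    area_fraction: float  # share of the frame covered by the logo (size)
    prominence: float     # 0-1 score combining size, position and visual clutter

detection = LogoDetection(
    brand="ExampleBrand",
    channel="example-tv-channel",
    timestamp_s=1843.2,
    duration_s=6.5,
    bbox=(0.72, 0.05, 0.18, 0.09),
    area_fraction=0.016,
    prominence=0.41,
)
print(detection)
```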

API & Data export

Our API & Data export feature gives you full flexibility in accessing your data. You can export all or selected parts and periods of your data into a variety of formats - or you can get API access to our platform and let the data flow into a BI system of your choice. It's your data - and we'll let you glean the insights hidden within it wherever it makes sense to you.
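As a minimal sketch of what pulling exposure data into a BI workflow via an API could look like (the endpoint, token and field names are hypothetical placeholders, not MediaCatch's documented API):

```python
import requests
import pandas as pd

# Hypothetical endpoint and parameters for illustration only; consult the
# actual MediaCatch API documentation for real URLs, auth and field names.
API_URL = "https://api.example-mediacatch.test/v1/exposures"
TOKEN = "YOUR_API_TOKEN"

response = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"from": "2024-01-01", "to": "2024-01-31", "brand": "ExampleBrand"},
    timeout=30,
)
response.raise_for_status()

# Load the JSON payload into a DataFrame and export it for a BI tool.
df = pd.DataFrame(response.json()["results"])
df.to_csv("exposures_january.csv", index=False)
```

Swapping `to_csv` for `to_parquet`, `to_excel` or a direct database write is where the integration with your BI system of choice would typically happen.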

Text recognition

Using lightning-fast OCR (Optical Character Recognition), the MediaCatch text recognition feature scans any type of media for words, topics or brand names of your choice. This enables your team to access a more granular analysis of media exposure. We have seen this feature find critical mentions of companies in, for example, TV graphics - mentions that regular media monitoring analysis would have missed.
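For a sense of the underlying technique, here is a minimal OCR sketch using the open-source Tesseract engine via pytesseract on a single grabbed frame; this is a generic illustration, not MediaCatch's own pipeline, and the file path and brand list are placeholders:

```python
from PIL import Image
import pytesseract

# Generic OCR sketch using the open-source Tesseract engine, not MediaCatch's
# own pipeline. The frame path and brand list are placeholders.
BRANDS = {"examplebrand", "anotherbrand"}

frame = Image.open("tv_frame.png")           # e.g. a frame grabbed from a broadcast
text = pytesseract.image_to_string(frame)    # raw text found in the image

# Check whether any watched brand name appears in the recognized text.
words = {w.strip(".,:;!?").lower() for w in text.split()}
hits = BRANDS & words
if hits:
    print("Brand mention found in TV graphics:", hits)
```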

Gender, age and ethnic recognition

This advanced AI feature has been developed to determine the gender, age and ethnic origin of people exposed in audio and visual media. Trained on extensive datasets, the service delivers instant results with astounding accuracy, allowing media companies and commercial corporations to monitor diversity across all communications.

24/7 Monitoring

Our list of sources is expanding rapidly. Currently, we monitor:

15 Danish TV channels

14 Danish radio stations

Top 200 Danish podcasts

600+ websites

Instagram (under development)

Sentiment recognition

While tracking exposure is crucial to both individuals and companies, understanding the quality and tone of that exposure is also pivotal. We are currently developing an advanced sentiment recognition feature that is being trained to discern between the overall tone of a piece of content and the sentiment pertaining to individual entities within it. Once finished, this feature will be introduced into our various products.
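To illustrate the distinction between overall tone and entity-level sentiment, here is a generic sketch using an off-the-shelf Hugging Face sentiment pipeline; it is not MediaCatch's model, and the example text and entity name are invented:

```python
from transformers import pipeline

# Generic sketch of overall tone vs. entity-level sentiment, using an
# off-the-shelf model; this is not MediaCatch's own model.
sentiment = pipeline("sentiment-analysis")

article = (
    "The championship final was a thrilling spectacle. "
    "However, SponsorCorp's halftime ad was widely criticized as tone-deaf."
)

# Overall tone of the whole piece of content.
print("Overall:", sentiment(article)[0])

# Sentiment pertaining to one entity: score only the sentences mentioning it.
entity = "SponsorCorp"
for sentence in article.split(". "):
    if entity in sentence:
        print(f"{entity}:", sentiment(sentence)[0])
```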

Custom dashboard

The MediaCatch custom dashboard provides the detailed data overview you always wanted. All gathered data is presented in our user-friendly dashboard - with built-in options to export data and get real-time updates based on the keywords you search for.

Coming soon: cutting-edge AI functionality such as prompt-based analysis and pattern recognition, allowing you to identify brewing shitstorms.

Face recognition

MediaCatch's unique face recognition feature can search for faces across all media sources and help you identify exposures of key talent, spokespersons or other people of interest with 95%+ accuracy*. And with deepfakes emerging as an increasing worry – we’ll also find the exposures you weren’t expecting! Face ID can also categorize the mood of the person exposed.

* Requires GDPR consent from the persons being searched for.
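For a sense of how face matching works in general, here is a minimal sketch using the open-source face_recognition package; it is not MediaCatch's system, the image paths are placeholders, and the consent requirements noted above would apply to any real deployment:

```python
import face_recognition

# Generic face-matching sketch with the open-source face_recognition package,
# not MediaCatch's system. Image paths are placeholders.
known_image = face_recognition.load_image_file("spokesperson.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

frame = face_recognition.load_image_file("broadcast_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    if match:
        print(f"Spokesperson found (distance {distance:.2f})")
```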

Keyword recognition

Ever wish you could listen to more than one thing at once? The MediaCatch keyword recognition feature can listen to an unlimited number of audio sources and identify any and every mention of a keyword or set of keywords that you define. While not yet advanced enough to recognize entire sentences, keyword recognition is used to pick up mentions at scale - in everything from podcasts to TikToks.
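As a minimal sketch of keyword spotting over already-transcribed audio segments (the keywords, sources and transcript snippets are invented placeholders, and this is not MediaCatch's implementation):

```python
import re

# Minimal keyword-spotting sketch over already-transcribed audio segments;
# the keywords and transcript snippets below are invented placeholders.
KEYWORDS = ["examplebrand", "product launch", "recall"]

segments = [
    {"source": "podcast-episode-12", "start_s": 734.0,
     "text": "...and ExampleBrand announced a product launch next month..."},
    {"source": "radio-morning-show", "start_s": 1502.5,
     "text": "...traffic is heavy on the motorway this morning..."},
]

for seg in segments:
    for kw in KEYWORDS:
        if re.search(re.escape(kw), seg["text"], flags=re.IGNORECASE):
            print(f"'{kw}' mentioned in {seg['source']} at {seg['start_s']:.0f}s")
```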

Object recognition

The MediaCatch object recognition feature can identify a vast variety of objects exposed within a picture or video. It is continuously trained to improve performance and deliver more detailed results. This feature can, for example, be used to create metadata for photos and videos – instantly making legacy libraries searchable – and to find exposure of objects of interest across all media types.
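For a generic illustration of the technique, here is a short object-detection sketch that uses an off-the-shelf torchvision model to turn detections into simple metadata tags; it is not MediaCatch's detector, and the image path and confidence threshold are arbitrary choices:

```python
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Generic object-detection sketch with an off-the-shelf torchvision model,
# not MediaCatch's own detector. "photo.jpg" is a placeholder path.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

image = Image.open("photo.jpg").convert("RGB")
with torch.no_grad():
    prediction = model([preprocess(image)])[0]

# Keep confident detections and emit them as metadata tags for the archive.
tags = {
    labels[int(lbl)]
    for lbl, score in zip(prediction["labels"], prediction["scores"])
    if score > 0.8
}
print("Metadata tags:", sorted(tags))
```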

Speech-to-text

The MediaCatch speech-to-text feature can listen to an unlimited number of audio sources at once, instantly transcribing them with high accuracy and identifying any and every mention of interest to you - in everything from podcasts and broadcasts to TikTok and other social media.
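As a generic illustration of speech-to-text with timestamps, here is a minimal sketch using OpenAI's open-source Whisper model; it is not MediaCatch's transcription pipeline, and the audio file name is a placeholder:

```python
import whisper

# Generic speech-to-text sketch using the open-source Whisper model,
# not MediaCatch's own transcription pipeline. "podcast.mp3" is a placeholder.
model = whisper.load_model("base")
result = model.transcribe("podcast.mp3")

print(result["text"])                      # full transcript
for segment in result["segments"]:         # timestamped segments
    print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] {segment['text']}")
```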

Catchscore

We quantify our tracking, giving you an easy metric to compare media and platforms against each other. Prime-time TV exposure rates higher than a mention on your neighbour's blog.
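As a purely hypothetical illustration of how such a weighting could work (the platform weights, prime-time bonus and formula below are invented for this sketch and are not the actual Catchscore model):

```python
# Hypothetical illustration of a weighted exposure score; the weights and
# formula are invented for this sketch and are not the actual Catchscore model.
PLATFORM_WEIGHT = {"tv": 1.0, "radio": 0.6, "podcast": 0.5, "web": 0.4, "blog": 0.1}
PRIME_TIME_BONUS = 1.5   # multiplier for prime-time broadcast slots

def exposure_score(platform: str, audience: int, prime_time: bool = False) -> float:
    score = PLATFORM_WEIGHT[platform] * audience
    if prime_time:
        score *= PRIME_TIME_BONUS
    return score

# Prime-time TV exposure rates far higher than a mention on a small blog.
print(exposure_score("tv", audience=400_000, prime_time=True))   # 600000.0
print(exposure_score("blog", audience=1_200))                    # 120.0
```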