Solution · Copyright detection
Reuse detection for media ownership workflows
MediaLayer matches incoming media against a rights-holder catalog so reused clips, transcoded audio, and copied images are visible inside ownership and monetization workflows. Same image, video, and audio matching primitives — applied to a different question.
The problem
Where this hurts in production
Reused video clips inside UGC
Creator uploads splice in licensed clips with re-encoding, cropping, or letterboxing. Hash-based dedupe misses the resulting near-duplicates.
Audio reuse and transcoding
Music, podcast clips, and voice-over reappear inside other uploads after format conversion or pitch-shifting. You need fingerprinting that survives the transform, not exact-match audio hashing.
Image asset republishing
Brand photography and licensed stock get reposted as part of UGC. Watermarks get cropped; colors get retouched. Reverse-image search alone is uneven — perceptual matching is more consistent.
Ownership decisions without overlap data
It's not enough to know two assets match — ownership and monetization workflows need to know which seconds, which frames, and how strongly. Aligned matched segments make that decision auditable.
How MediaLayer fits
Same APIs. Same JSON envelope. Targeted at this workflow.
MediaLayer wraps image, video, and audio similarity behind one JSON request shape. POST two URLs to the matching endpoint for that media type and you get a similarity score, a confidence label, and aligned matched segments — start and end timestamps for video and audio, so ownership tooling can show exactly which seconds overlap with which reference asset.
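A minimal server-to-server call might look like the sketch below. The host, the request field names, and the example URLs are assumptions for illustration (the JSON envelope on this page shows `source_url`/`target_url` style fields; the real host and auth flow come from RapidAPI onboarding):

```python
import json
import urllib.request

# Hypothetical host and credential -- substitute your real RapidAPI values.
API_HOST = "https://medialayer.example.com"
API_KEY = "YOUR_RAPIDAPI_KEY"

def build_match_request(media_type: str, source_url: str, target_url: str) -> urllib.request.Request:
    """Build the POST for /image/match, /video/match, or /audio/match."""
    body = json.dumps({"source_url": source_url, "target_url": target_url}).encode()
    return urllib.request.Request(
        f"{API_HOST}/{media_type}/match",
        data=body,
        headers={"Content-Type": "application/json", "x-rapidapi-key": API_KEY},
        method="POST",
    )

def match_media(media_type: str, source_url: str, target_url: str) -> dict:
    """POST two media URLs and return the JSON envelope (score, confidence, segments)."""
    with urllib.request.urlopen(build_match_request(media_type, source_url, target_url)) as resp:
        return json.load(resp)
```

Because the envelope is the same across `/image/match`, `/video/match`, and `/audio/match`, one client function covers all three media types.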
Matching is built for the obfuscation real reuse goes through: re-encoding, resizing, cropping, letterboxing, watermark removal, audio transcoding, and pitch-shifting. The aligned-segments output lets ownership and monetization workflows action overlap by duration, not just by a coarse match-or-no-match flag.
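Overlap duration falls straight out of the aligned-segments output. A small sketch, using the `matched_segments` field names from the example response on this page:

```python
def total_overlap_seconds(matched_segments: list[dict]) -> float:
    """Sum the source-side duration of every aligned matched segment."""
    return sum(seg["source_end"] - seg["source_start"] for seg in matched_segments)

# Segments copied from the example JSON response on this page.
segments = [
    {"source_start": 12.4, "source_end": 28.9, "target_start": 4.1, "target_end": 20.6, "score": 0.91},
    {"source_start": 60.0, "source_end": 67.4, "target_start": 88.3, "target_end": 95.7, "score": 0.84},
]
total_overlap_seconds(segments)  # 16.5 s + 7.4 s, roughly 23.9 seconds of reuse
```

A duration threshold on this number is what lets a pipeline distinguish an incidental two-second overlap from wholesale clip reuse.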
For rights-holder catalogs that have grown past pairwise comparisons, the same primitives map onto the Enterprise media search surface: ingest the rights-holder catalog into a similarity index, run one-to-many search against every new upload, and return ranked matches with overlap timestamps. Talk to MediaLayer AI Labs about direct API access, private deployment, and bulk ingestion.
Operationally, ownership pipelines call the API server-to-server with their own x-rapidapi-key (or, on direct-API enterprise deployments, with private credentials and a private endpoint). URL validation rejects private, loopback, and cloud-metadata addresses, which is the right default when ingestion handlers receive URLs from third-party publishers. Aligned matched-segment output is stable across responses, so monetization decisions can use overlap-duration thresholds and stay auditable as creator dispute volumes grow — and the same JSON envelope keeps rights-decision audit logs uniform across image, video, and audio matches.
Workflow example
From media in to match decision out
1. Receive the upload
Pull the new image, video, or audio URL from the publishing or moderation pipeline.
2. Match against rights catalog
POST source_url + target_url to /image/match, /video/match, or /audio/match.
3. Read aligned segments
matched_segments tells you exactly which seconds or frames overlap with the rights-held asset.
4. Drive the ownership decision
Use overlap duration + similarity_score to route to allow, share-revenue, hold, or block lanes.
5. Scale to the catalog
When the rights catalog is large, swap pairwise calls for Enterprise one-to-many search.
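The decision step can be sketched as a small threshold table. The thresholds and lane names below are illustrative placeholders, not product defaults; each platform tunes them against its own dispute and monetization policy:

```python
def route(similarity_score: float, overlap_seconds: float) -> str:
    """Map a match result onto a monetization lane. All thresholds are illustrative."""
    if similarity_score >= 0.85 and overlap_seconds >= 30.0:
        return "block"
    if similarity_score >= 0.85 and overlap_seconds >= 10.0:
        return "share-revenue"
    if similarity_score >= 0.70:
        return "hold"  # borderline match: queue for human review
    return "allow"

# The example response on this page (score 0.89, ~23.9 s of overlap)
# would land in the share-revenue lane under these thresholds.
route(0.89, 23.9)
```

Because the lanes key off overlap duration as well as score, a strong but brief match routes differently from sustained reuse, which is exactly what the aligned-segments output makes possible.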
{
"match": true,
"confidence": "high",
"similarity_score": 0.89,
"processing_time_ms": 1820,
"media_type": "video",
"matched_segments": [
{ "source_start": 12.4, "source_end": 28.9, "target_start": 4.1, "target_end": 20.6, "score": 0.91 },
{ "source_start": 60.0, "source_end": 67.4, "target_start": 88.3, "target_end": 95.7, "score": 0.84 }
]
}
Relevant API endpoints
Drop these into your pipeline
POST /video/match
Compare two videos and surface aligned matched segments with per-segment scores. The right primitive for clip-level reuse.
Learn more →
POST /audio/match
Match audio recordings even after transcoding or partial reuse. Returns offset-aligned overlapping segments.
Learn more →
POST /image/match
Detect reused images and brand photography even after watermarking, cropping, or re-encoding.
Learn more →
Real-world examples
Patterns we see in this space
Creator-platform monetization
Match every new upload against the rights-holder catalog and route monetization based on aligned overlap duration, not coarse flags.
Stock library protection
Detect republished stock photography across surfaces with watermark-resilient image matching.
Music & podcast rights
Audio fingerprint uploads against music and podcast libraries to flag clip-level reuse and feed share-revenue or takedown workflows.
Related
Keep exploring
Content moderation
Repeated harmful media, duplicate uploads, and review-queue clustering.
Open →
Ad-tech creative audit
Reused creatives and recycled placements with the same matching primitives.
Open →
Enterprise media search
One-to-many search against millions of indexed assets — for catalogs that have outgrown pairwise comparisons.
Open →
Audio Fingerprinting API
Deeper docs on the /audio/match endpoint, including supported codecs and aligned-segment responses.
Open →
Video playground
Try /video/match in your browser before wiring it into a rights-aware ingestion pipeline.
Open →Ready to ship?
Start with the public API or talk to us about scale.
Try the public endpoints on RapidAPI, or talk to MediaLayer AI Labs about high-volume access, private deployment, and custom rate limits.
Public API access is distributed through RapidAPI. Enterprise direct API access is available only after onboarding.
Looking for something else? Contact us.