Semantic Web, AI Agents, and OpenLink

Robert Scoble connects Tim Berners-Lee, the Semantic Web, Kingsley Uyi Idehen, AI agents, and OpenLink in a post whose main evidence is an extracted hour-long video.

Extracted video duration: PT1H9M55S
Source video dimensions: 1920x1080
1080p MP4 content length: 761,900,515 bytes

Extracted video

The video is modeled as a schema:VideoObject with poster thumbnail, duration, 1080p content URL, HLS playlist, and lower MP4 variants.
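
A minimal rdflib sketch of that shape; the example.com base and the poster, playlist, and variant URLs are placeholders, not the extracted media URLs:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

SCHEMA = Namespace("https://schema.org/")

g = Graph()
g.bind("schema", SCHEMA)

# Hash IRI naming the video inside the artifact's document URL (placeholder base).
video = URIRef("https://example.com/scoble-post#video")

g.add((video, RDF.type, SCHEMA.VideoObject))
g.add((video, SCHEMA.duration, Literal("PT1H9M55S", datatype=XSD.duration)))
g.add((video, SCHEMA.width, Literal(1920, datatype=XSD.integer)))
g.add((video, SCHEMA.height, Literal(1080, datatype=XSD.integer)))
g.add((video, SCHEMA.thumbnailUrl, URIRef("https://example.com/poster.jpg")))
# 1080p MP4 is the primary content URL; HLS and lower MP4s ride along as associated media.
g.add((video, SCHEMA.contentUrl, URIRef("https://example.com/video-1080p.mp4")))
g.add((video, SCHEMA.associatedMedia, URIRef("https://example.com/playlist.m3u8")))
g.add((video, SCHEMA.associatedMedia, URIRef("https://example.com/video-720p.mp4")))

print(g.serialize(format="turtle"))
```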

Tweet text

The text from the X initial state is preserved separately from the video evidence.

Tim Berners-Lee wrote the semantic web paper 25 years ago. But it wasn't possible until now. @kidehen shows how deeply AI is changing how we (and even AI agents) can surf the web. https://www.openlinksw.com/ makes the Web 100x more useful. I spend an hour with him getting a

Named entities

Visible entities resolve through URIBurner describe links over RDF hash IRIs.

Robert Scoble

Author of the X post; profile describes him as covering Silicon Valley AI, robots, holodecks, BCIs, and analysis of new things.

Kingsley Uyi Idehen

Mentioned in the post as the person demonstrating how AI changes Web surfing and agent interaction.

Semantic Web

The Web vision centered on machine-readable meaning, identifiers, relationships, and data interoperability.

OpenLink Software

The company behind the linked site that Scoble says makes the Web 100 times more useful.

URIBurner

Linked Data resolver and description service used by this HTML and Markdown for entity hyperlinks.

OpenLink Virtuoso

RDF database, SPARQL endpoint, and Linked Data server backing URIBurner.

AI agents

Software agents that can browse, interpret, retrieve, and act on Web resources.

Core claims

The post links Semantic Web history to AI-agent-era Web navigation.

Semantic Web timing claim

Robert Scoble frames Tim Berners-Lee's Semantic Web paper as a 25-year-old vision that becomes practical in an AI-agent era.

Open Web advantage

The OpenLink reference places the discussion in an open Web and Linked Data frame rather than a closed app-only frame.

Media extraction matters

Extracting the video variants, poster, duration, and media URLs preserves the evidence layer that a text-only RDF model would lose.

Historical continuity

The post connects the 2001 Semantic Web vision with 2026-era AI-agent behavior and tooling.

Media variants

All extracted video variants are modeled in RDF and linked here for direct inspection.

Metrics

Engagement, author, and video metadata extracted from X page state and media headers.

Video duration: PT1H9M55S. X video metadata reports 4,195,432 milliseconds, approximately one hour, nine minutes, and fifty-five seconds.
Source video resolution: 1920x1080. The source video metadata reports a 16:9 video with 1920 by 1080 original dimensions.
1080p MP4 content length: 761,900,515 bytes. The 1080p MP4 HEAD response reported a 761,900,515 byte content length.
Likes: 6. X initial state reported six likes at fetch time.
Replies: 1. X initial state reported one reply at fetch time.
Reposts: 1. X initial state reported one repost at fetch time.
Quotes: 1. X initial state reported one quote at fetch time.
Bookmarks: 3. X initial state reported three bookmarks at fetch time.
Robert Scoble followers: 579,501. X user metadata reported 579,501 followers for Robert Scoble.
Author posts: 247,323. X user metadata reported 247,323 posts for Robert Scoble.
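
The duration and byte-count figures come from simple probes; a small sketch, with the millisecond conversion checked against the value reported above (any URL passed to the HEAD helper is hypothetical):

```python
import urllib.request

def ms_to_iso8601(ms: int) -> str:
    """Truncate milliseconds to whole seconds and format as an ISO 8601 duration."""
    hours, rem = divmod(ms // 1000, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"PT{hours}H{minutes}M{seconds}S"

def content_length(url: str) -> int:
    """HEAD the media URL and read the server-reported Content-Length."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return int(resp.headers["Content-Length"])

# 4,195,432 ms -> 1 h, 9 min, 55 s (the 0.432 s remainder is dropped).
assert ms_to_iso8601(4_195_432) == "PT1H9M55S"
```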

FAQ

Questions and answers are named RDF resources.

What is this X post about?

Robert Scoble argues that the Semantic Web vision becomes newly practical because AI agents can now use structured Web descriptions.

Why is Tim Berners-Lee mentioned?

The post anchors the discussion in the Semantic Web paper associated with Tim Berners-Lee and its roughly 25-year arc.

Who is Kingsley Uyi Idehen in the post?

Kingsley is mentioned as the person showing Scoble how AI is changing Web surfing for people and agents.

What role does OpenLink play?

The post links to OpenLink and says it makes the Web 100 times more useful.

Why is the video important?

The video is the primary evidence for the demonstration; it contains the hour-long interaction referenced by the post.

Which video URL should be treated as primary?

The 1920x1080 MP4 variant is modeled as the primary content URL, with HLS and lower MP4 variants preserved as associated media.

How long is the video?

X metadata reports 4,195,432 milliseconds, modeled as approximately PT1H9M55S.

What does AI-agent Web surfing mean?

It means agents can use structured identifiers, descriptions, and relationships to understand and act on Web resources.

What is the Linked Data angle?

Linked Data gives agents dereferenceable identifiers and graph-shaped context that can be used across sources.
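
As a sketch of the basic move, an agent dereferences an identifier with an RDF Accept header and gets graph data back; the DBpedia IRI below is just a convenient public example, not one of this artifact's IRIs:

```python
import urllib.request

def dereference(iri: str) -> str:
    """Ask the identifier's server for an RDF description via content negotiation."""
    req = urllib.request.Request(iri, headers={"Accept": "text/turtle"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# An agent can follow graph edges across sources by repeating this per IRI.
print(dereference("https://dbpedia.org/resource/Semantic_Web")[:300])
```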

What does this artifact preserve?

It preserves tweet text, author metadata, engagement metrics, video metadata, media variants, glossary, FAQ, HowTo, and provenance.

Why avoid text-only modeling?

A text-only model would lose the attached video, thumbnail, duration, encoding formats, and source evidence needed for reuse.

How are visible entities linked?

Visible entities link through URIBurner describe URLs using the url query parameter over RDF hash IRIs.
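
A sketch of that link pattern, assuming a Virtuoso-style describe endpoint at linkeddata.uriburner.com and a hypothetical hash IRI:

```python
from urllib.parse import quote

# Assumed base: describe endpoint taking the target IRI in a url query parameter.
DESCRIBE_BASE = "https://linkeddata.uriburner.com/describe/?url="

def describe_link(hash_iri: str) -> str:
    """Wrap an RDF hash IRI in a URIBurner describe hyperlink."""
    return DESCRIBE_BASE + quote(hash_iri, safe="")

# Hypothetical hash IRI for an entity in this artifact's graph.
print(describe_link("https://example.com/scoble-post#RobertScoble"))
```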

Glossary

Terms and definitions link into the RDF graph.

HowTo

The extraction workflow used to produce the artifacts.

01

Fetch the public X page

Retrieve the X status HTML and inspect embedded initial state for tweet, user, and media metadata.

02

Extract the tweet entity

Capture the status ID, full text, creation time, engagement metrics, author ID, mentions, and expanded URLs.

03

Extract the video object

Capture the X video media key, thumbnail URL, duration, original dimensions, HLS playlist, and MP4 variants.

04

Probe the media variants

Issue HEAD requests against the extracted media URLs to record the content lengths reported in the metrics above.

05

Model entities with RDF hash IRIs

Create named resources for the post, author, mentioned people, OpenLink, Semantic Web, AI agents, video, metrics, FAQ, glossary, and HowTo.
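
A compact rdflib sketch of this step's pattern, where one placeholder document URL mints several hash IRIs for the named resources:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")
BASE = "https://example.com/scoble-post"  # placeholder document URL

g = Graph()
g.bind("schema", SCHEMA)

# One document URL mints many named resources via hash fragments.
post   = URIRef(BASE + "#post")
author = URIRef(BASE + "#RobertScoble")
video  = URIRef(BASE + "#video")
faq    = URIRef(BASE + "#faq")

g.add((post, RDF.type, SCHEMA.SocialMediaPosting))
g.add((post, SCHEMA.author, author))
g.add((post, SCHEMA.video, video))
g.add((author, RDF.type, SCHEMA.Person))
g.add((author, SCHEMA.name, Literal("Robert Scoble")))
g.add((faq, RDF.type, SCHEMA.FAQPage))

print(g.serialize(format="turtle"))
```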