Poster thumbnail
https://pbs.twimg.com/amplify_video_thumb/2052776025646678016/img/XAcB5pzNEsBHSw1d.jpg
Robert Scoble connects Tim Berners-Lee, the Semantic Web, Kingsley Uyi Idehen, AI agents, and OpenLink in a post whose main evidence is an extracted hour-long video.
The video is modeled as a schema:VideoObject with poster thumbnail, duration, 1080p content URL, HLS playlist, and lower MP4 variants.
application/x-mpegURL. Adaptive streaming playlist for the X-hosted video.
video/mp4. Low-resolution MP4 rendition exposed in X video metadata.
video/mp4. Medium-resolution MP4 rendition exposed in X video metadata.
video/mp4. HD MP4 rendition exposed in X video metadata.
video/mp4. Full-HD MP4 rendition exposed in X video metadata; HTTP headers reported content length of 761,900,515 bytes.
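The schema:VideoObject modeling described above can be sketched as schema.org JSON-LD. This is a minimal sketch, not the document's actual graph: the `#Video` hash IRI is hypothetical, and because the Full-HD MP4 URL is elided in this extraction, only the HLS playlist URL appears as associated media.

```python
import json

# Hedged sketch: schema.org JSON-LD for the extracted X video.
# The @id hash IRI is a placeholder; thumbnail, duration, and the
# HLS playlist URL are taken from the extraction above.
video_jsonld = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "@id": "https://example.org/scoble-post#Video",  # hypothetical hash IRI
    "thumbnailUrl": "https://pbs.twimg.com/amplify_video_thumb/2052776025646678016/img/XAcB5pzNEsBHSw1d.jpg",
    "duration": "PT1H9M55S",
    # the Full-HD MP4 serves as schema:contentUrl; its URL is elided here
    "encodingFormat": "video/mp4",
    "associatedMedia": [
        {
            "@type": "MediaObject",
            "encodingFormat": "application/x-mpegURL",
            "contentUrl": "https://video.twimg.com/amplify_video/2052776025646678016/pl/Ur2kzz2nDg6bblMw.m3u8?tag=27",
        }
    ],
}

print(json.dumps(video_jsonld, indent=2))
```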
The text from the X initial state is preserved separately from the video evidence.
Visible entities resolve through URIBurner describe links over RDF hash IRIs.
Author of the X post; profile describes him as covering Silicon Valley AI, robots, holodecks, BCIs, and analysis of new things.
Mentioned in the post as the person demonstrating how AI changes Web surfing and agent interaction.
Referenced as author of the Semantic Web paper.
The Web vision centered on machine-readable meaning, identifiers, relationships, and data interoperability.
The linked OpenLink site that Scoble says makes the Web 100 times more useful.
Linked Data resolver and description service used by this HTML and Markdown for entity hyperlinks.
RDF database, SPARQL endpoint, and Linked Data server backing URIBurner.
Software agents that can browse, interpret, retrieve, and act on Web resources.
The post links Semantic Web history to AI-agent-era Web navigation.
Robert Scoble frames Tim Berners-Lee's Semantic Web paper as a 25-year-old vision that becomes practical in an AI-agent era.
The post says AI is changing how people and AI agents can surf the Web.
The post states that OpenLink makes the Web 100 times more useful.
Scoble credits Kingsley Uyi Idehen with showing how AI changes Web navigation and interaction.
The attached hour-long video is the primary source for the demonstration referenced by the post.
The post implies that structured, resolvable Web data helps AI agents act on Web resources more effectively.
Making Web resources machine-describable can improve discovery, interpretation, and action by people and agents.
AI agents benefit when Web identifiers resolve to descriptions that expose relationships, context, and provenance.
The OpenLink reference places the discussion in an open Web and Linked Data frame rather than a closed app-only frame.
Extracting the video variants, poster, duration, and media URLs preserves the evidence layer that a text-only RDF model would lose.
The post connects the 2001 Semantic Web vision with 2026-era AI-agent behavior and tooling.
URIBurner describe links over RDF hash IRIs let visible entities in the HTML and Markdown resolve into linked data descriptions.
All extracted video variants are modeled in RDF and linked here for direct inspection.
application/x-mpegURL
https://video.twimg.com/amplify_video/2052776025646678016/pl/Ur2kzz2nDg6bblMw.m3u8?tag=27
video/mp4. Low, medium, HD, and Full-HD MP4 renditions, as described above.
Engagement, author, and video metadata extracted from X page state and media headers.
Questions and answers are named RDF resources.
The post links to OpenLink and says it makes the Web 100 times more useful.
X metadata reports a duration of 4,195,432 milliseconds, modeled as approximately PT1H9M55S.
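The millisecond-to-ISO-8601 conversion behind that value can be sketched as follows; the helper name is illustrative, and sub-second remainder is dropped, which is why the modeled duration is approximate.

```python
# Sketch: convert the reported millisecond duration into an ISO 8601
# duration string matching the modeled value.
def ms_to_iso8601(ms: int) -> str:
    total_seconds = ms // 1000          # drop the sub-second remainder
    hours, rem = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"PT{hours}H{minutes}M{seconds}S"

print(ms_to_iso8601(4_195_432))  # → PT1H9M55S
```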
Terms and definitions link into the RDF graph.
A method for publishing structured data using resolvable identifiers and typed relationships.
An identifier formed by appending a fragment to a base document URL to denote a specific entity.
A hyperlink that sends an entity IRI to a description service such as URIBurner.
A specific MP4 rendition of the same source video at a given resolution and bitrate.
An identifier that can be resolved to useful descriptions or data about the thing it denotes.
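Putting the glossary terms together, a describe link can be built by percent-encoding an entity's hash IRI into a resolver query. This is a hedged sketch: the `/describe/?url=` pattern follows the Virtuoso-style resolver behind URIBurner, but the exact endpoint path and the example IRI are assumptions.

```python
from urllib.parse import quote

# Assumed resolver endpoint (Virtuoso-style describe service)
RESOLVER = "https://linkeddata.uriburner.com/describe/"

def describe_link(entity_iri: str) -> str:
    # Percent-encode the whole IRI, including the # fragment, so the
    # resolver receives it intact as a single query value.
    return f"{RESOLVER}?url={quote(entity_iri, safe='')}"

print(describe_link("https://example.org/scoble-post#SemanticWeb"))
```

Encoding the fragment matters: an unencoded `#` would be stripped by the browser before the resolver ever saw it, so the hash IRI would no longer denote the intended entity.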
The extraction workflow used to produce the artifacts.
Retrieve the X status HTML and inspect embedded initial state for tweet, user, and media metadata.
Capture the status ID, full text, creation time, engagement metrics, author ID, mentions, and expanded URLs.
Capture the X video media key, thumbnail URL, duration, original dimensions, HLS playlist, and MP4 variants.
Use the highest-resolution MP4 as schema:contentUrl while preserving HLS and lower MP4 variants as associated media.
Create named resources for the post, author, mentioned people, OpenLink, Semantic Web, AI agents, video, metrics, FAQ, glossary, and HowTo.
Embed a video element using the remote MP4 with preload metadata and include the poster thumbnail and media links.
Parse RDF, HTML, and JSON-LD, check resolver links, and save HTML, Markdown, and Turtle to the configured output folders.
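The variant-selection step above, choosing the highest-resolution MP4 as schema:contentUrl while keeping the HLS playlist and lower renditions as associated media, can be sketched as below. The variant shape (`content_type`, `bitrate`, `url`) mirrors the general form of X video metadata but is an assumption here, and the bitrates and URLs are placeholders.

```python
# Placeholder variants standing in for the extracted X video metadata.
variants = [
    {"content_type": "application/x-mpegURL", "url": "https://video.twimg.com/.../pl.m3u8"},
    {"content_type": "video/mp4", "bitrate": 256_000, "url": "https://video.twimg.com/.../low.mp4"},
    {"content_type": "video/mp4", "bitrate": 2_176_000, "url": "https://video.twimg.com/.../hd.mp4"},
    {"content_type": "video/mp4", "bitrate": 10_368_000, "url": "https://video.twimg.com/.../fullhd.mp4"},
]

# Highest-bitrate MP4 becomes schema:contentUrl ...
mp4s = [v for v in variants if v["content_type"] == "video/mp4"]
content = max(mp4s, key=lambda v: v["bitrate"])

# ... and everything else (HLS playlist, lower MP4s) stays as
# schema:associatedMedia, preserving the full evidence layer.
associated = [v for v in variants if v is not content]

print(content["url"])
```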