@base <https://kingsley.idehen.net/DAV/data/2026/05/semweb-llm-symbiosis/> .
@prefix schema: <http://schema.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

# ── Main Analysis ───────────────────────────────────────────────────
<#analysis> a schema:Article, schema:ScholarlyArticle ;
  schema:name "Semantic Web & LLM Symbiosis: The Agentic Web"@en ;
  schema:headline "What's Old Is New Again: LLMs as the Generic RDF Client That Realizes the 2001 Semantic Web Vision"@en ;
  schema:dateCreated "2026-05-02" ;
  schema:author <#kingsley-idehen> ;
  schema:abstract "A unified knowledge graph tracing the arc from the 2001 Scientific American 'The Semantic Web' vision through the Semantic Web Layer Cake tweak (2017), the realization that the project didn't fail — it was waiting for AI (2025), and the emergence of LLMs as powerful generic RDF clients (2025), culminating in the 2026 thesis that the Web's original design is being realized through AI-driven acceleration under the banner of 'what's old is new again.'"@en ;
  schema:about
    <#semantic-web>,
    <#large-language-models>,
    <#rdf>,
    <#linked-data>,
    <#agentic-web>,
    <#layer-cake>,
    <#knowledge-graph>,
    <#schema-org>,
    <#virtuoso> ;
  schema:hasPart <#timeline>, <#layer-cake-section>, <#llm-rdf-client-section>, <#semweb-waiting-section>, <#whats-old-is-new-section>, <#faq>, <#glossary>, <#howto> ;
  schema:isBasedOn
    <https://www.linkedin.com/posts/kidehen_web-awww-semanticweb-activity-7455264301948301313-xGw_>,
    <https://www.linkedin.com/pulse/large-language-models-llms-powerful-generic-rdf-clients-idehen-xwhfe/>,
    <https://www.linkedin.com/pulse/semantic-web-project-didnt-fail-waiting-ai-yin-its-yang-idehen-j01se/>,
    <https://www.lassila.org/publications/2001/SciAm.pdf>,
    <https://medium.com/openlink-software-blog/semantic-web-layer-cake-tweak-explained-6ba5c6ac3fab>,
    <https://medium.com/openlink-software-blog/what-is-dbpedia-and-why-is-it-important-d306b5324f90>,
    <https://medium.com/virtuoso-blog/on-the-mutually-beneficial-nature-of-dbpedia-and-wikidata-5fb2b9f22ada>,
    <https://medium.com/virtuoso-blog/what-is-a-sparql-endpoint-and-why-is-it-important-b3c9e6a20a8b>,
    <https://medium.com/virtuoso-blog/what-is-a-virtuoso-sparql-endpoint-and-why-is-it-important-5244df738a3e> .

<#kingsley-idehen> a schema:Person ;
  schema:name "Kingsley Uyi Idehen"@en ;
  schema:jobTitle "Founder & CEO, OpenLink Software"@en ;
  schema:url <https://www.linkedin.com/in/kidehen/> ;
  schema:affiliation <#openlink-software> .

<#openlink-software> a schema:Organization ;
  schema:name "OpenLink Software"@en ;
  schema:url <https://www.openlinksw.com/> .

# ── Source Documents ─────────────────────────────────────────────────
<#source-linkedin-post-2026> a schema:SocialMediaPosting ;
  schema:name "what's old is new again — Web Design Issues revisited"@en ;
  schema:url <https://www.linkedin.com/posts/kidehen_web-awww-semanticweb-activity-7455264301948301313-xGw_> ;
  schema:datePublished "2026-04-29" ;
  schema:author <#kingsley-idehen> ;
  schema:text "As we continue our AI-driven acceleration into a 'what's old is new again' cycle, it's important to revisit Tim Berners-Lee's original Design Issues document collection. The design for what is still unfolding today was laid out long ago in open form, using open standards. The tension we see today has less to do with technology and more to do with business models."@en ;
  schema:mentions <#design-issues>, <#tim-berners-lee> .

<#source-llm-rdf-client> a schema:Article ;
  schema:name "Large Language Models (LLMs) as Powerful Generic RDF Clients"@en ;
  schema:url <https://www.linkedin.com/pulse/large-language-models-llms-powerful-generic-rdf-clients-idehen-xwhfe/> ;
  schema:datePublished "2025-08-23" ;
  schema:author <#kingsley-idehen> ;
  schema:description "LLMs fill the role of a generic RDF client — what Mosaic/Netscape did for HTML, LLMs do for RDF, enabling an Agentic Web where AI agents act as intermediaries between humans and structured data."@en .

<#source-semweb-waiting> a schema:Article ;
  schema:name "The Semantic Web Project Didn't Fail — It Was Waiting for AI (The Yin of its Yang)"@en ;
  schema:url <https://www.linkedin.com/pulse/semantic-web-project-didnt-fail-waiting-ai-yin-its-yang-idehen-j01se/> ;
  schema:datePublished "2025-06-14" ;
  schema:author <#kingsley-idehen> ;
  schema:description "The Semantic Web has quietly evolved into the foundational layer of today's Web, made practically usable by AI and LLMs after being long dismissed as a marketing failure."@en .

<#source-sciam-2001> a schema:Article ;
  schema:name "The Semantic Web"@en ;
  schema:url <https://www.lassila.org/publications/2001/SciAm.pdf> ;
  schema:datePublished "2001-05" ;
  schema:author <#tim-berners-lee>, <#james-hendler>, <#ora-lassila> ;
  schema:publisher "Scientific American"@en ;
  schema:description "The foundational vision: intelligent software agents automating tasks across a web of machine-readable data, using ontologies and logical inference — the Pete and Lucy scenario."@en .

<#source-dbpedia> a schema:Article ;
  schema:name "What is DBpedia, and why is it important?"@en ;
  schema:url <https://medium.com/openlink-software-blog/what-is-dbpedia-and-why-is-it-important-d306b5324f90> ;
  schema:datePublished "2016-07-22" ;
  schema:author <#kingsley-idehen> ;
  schema:description "DBpedia extracts structured data from Wikipedia and publishes it as Linked Open Data, serving as the critical kernel around which the LOD Cloud blossomed. It makes Wikipedia knowledge compatible with SPARQL, business intelligence, NLP, and AI."@en .

<#source-dbpedia-wikidata> a schema:Article ;
  schema:name "On the Mutually Beneficial Nature of DBpedia and Wikidata"@en ;
  schema:url <https://medium.com/virtuoso-blog/on-the-mutually-beneficial-nature-of-dbpedia-and-wikidata-5fb2b9f22ada> ;
  schema:datePublished "2017-02-11" ;
  schema:author <#kingsley-idehen> ;
  schema:description "DBpedia and Wikidata are complementary, not competitive: DBpedia extracts structured data from Wikipedia documents while Wikidata creates Linked Open metadata to supplement them. Federated SPARQL queries across both endpoints demonstrate their combined power."@en .
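
# A federated SPARQL sketch of the pattern described above, combining a
# DBpedia abstract with a Wikidata population count via SERVICE. Kept as a
# comment so this file stays valid Turtle; run it against the public
# DBpedia endpoint:
#
#   PREFIX owl: <http://www.w3.org/2002/07/owl#>
#   PREFIX dbo: <http://dbpedia.org/ontology/>
#   PREFIX wdt: <http://www.wikidata.org/prop/direct/>
#   SELECT ?abstract ?population WHERE {
#     <http://dbpedia.org/resource/Boston> dbo:abstract ?abstract ;
#       owl:sameAs ?wd .
#     FILTER (STRSTARTS(STR(?wd), "http://www.wikidata.org/entity/"))
#     FILTER (LANG(?abstract) = "en")
#     SERVICE <https://query.wikidata.org/sparql> {
#       ?wd wdt:P1082 ?population .
#     }
#   }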

<#source-sparql-endpoint> a schema:Article ;
  schema:name "What is a SPARQL Endpoint, and why is it important?"@en ;
  schema:url <https://medium.com/virtuoso-blog/what-is-a-sparql-endpoint-and-why-is-it-important-b3c9e6a20a8b> ;
  schema:datePublished "2018-08-04" ;
  schema:author <#kingsley-idehen> ;
  schema:description "A SPARQL Endpoint is an HTTP-based point of presence capable of receiving and processing SPARQL Protocol requests — analogous to an ODBC/JDBC data source but for the Semantic Web of Linked Data."@en .

<#source-virtuoso-endpoint> a schema:Article ;
  schema:name "What is a Virtuoso SPARQL Endpoint, and why is it important?"@en ;
  schema:url <https://medium.com/virtuoso-blog/what-is-a-virtuoso-sparql-endpoint-and-why-is-it-important-5244df738a3e> ;
  schema:datePublished "2018-11-20" ;
  schema:author <#kingsley-idehen> ;
  schema:description "Every Virtuoso instance includes a SPARQL endpoint supporting federated queries, anytime query, SPARQL-BI, and multiple output formats. Virtuoso's endpoint played a pivotal role in the LOD Cloud anchored by DBpedia."@en .

<#source-layer-cake> a schema:Article ;
  schema:name "Semantic Web Layer Cake Tweak, Explained"@en ;
  schema:url <https://medium.com/openlink-software-blog/semantic-web-layer-cake-tweak-explained-6ba5c6ac3fab> ;
  schema:datePublished "2017-07-13" ;
  schema:author <#kingsley-idehen> ;
  schema:description "A tweak to the W3C Semantic Web Layer Cake that corrects the XML misconception, adds multiple RDF document types, and maps each layer to explicit business benefits."@en .

# ── Key People ───────────────────────────────────────────────────────
<#tim-berners-lee> a schema:Person ;
  schema:name "Tim Berners-Lee"@en ;
  schema:url <https://www.w3.org/People/Berners-Lee/> ;
  owl:sameAs <http://dbpedia.org/resource/Tim_Berners-Lee> .

<#james-hendler> a schema:Person ;
  schema:name "James Hendler"@en ;
  owl:sameAs <http://dbpedia.org/resource/James_Hendler> .

<#ora-lassila> a schema:Person ;
  schema:name "Ora Lassila"@en ;
  owl:sameAs <http://dbpedia.org/resource/Ora_Lassila> .

<#john-sowa> a schema:Person ;
  schema:name "John F. Sowa"@en .

<#richard-macmanus> a schema:Person ;
  schema:name "Richard MacManus"@en .

# ── Key Concepts ─────────────────────────────────────────────────────
<#semantic-web> a skos:Concept, schema:DefinedTerm ;
  schema:name "Semantic Web"@en ;
  schema:description "An extension of the World Wide Web in which information is given well-defined meaning, enabling computers and people to work in cooperation. First articulated in Scientific American, May 2001."@en ;
  owl:sameAs <http://dbpedia.org/resource/Semantic_Web> .

<#large-language-models> a skos:Concept, schema:DefinedTerm ;
  schema:name "Large Language Models (LLMs)"@en ;
  schema:description "AI systems trained on vast text corpora that function as generic RDF clients — translating structured data into natural language, harmonizing across sources, and grounding outputs in knowledge graphs to mitigate hallucinations."@en .

<#rdf> a skos:Concept, schema:DefinedTerm ;
  schema:name "Resource Description Framework (RDF)"@en ;
  schema:description "An abstract language for the systematic use of signs (identifiers), syntax (subject→predicate→object), and semantics on the Web. Its power comes not from any specific serialization but from this abstract model."@en ;
  owl:sameAs <http://dbpedia.org/resource/Resource_Description_Framework> .

<#linked-data> a skos:Concept, schema:DefinedTerm ;
  schema:name "Linked Data"@en ;
  schema:description "A method of publishing structured data using HTTP URIs so that it can be interlinked and become more useful through semantic queries. Follows Tim Berners-Lee's four Linked Data Principles."@en ;
  owl:sameAs <http://dbpedia.org/resource/Linked_data> .

<#agentic-web> a skos:Concept, schema:DefinedTerm ;
  schema:name "Agentic Web"@en ;
  schema:description "An era where AI agents act as intermediaries between humans and complex structured data, automating queries, generating visual outputs, and supporting semantic mashups — the realization of the 2001 SciAm vision through LLM + RDF symbiosis."@en .

<#layer-cake> a skos:Concept, schema:DefinedTerm ;
  schema:name "Semantic Web Layer Cake"@en ;
  schema:description "A layered architecture diagram of Semantic Web standards, from identifiers (URIs/IRIs) through RDF abstract language, dictionaries/ontologies, query (SPARQL), rules (SWRL/SHACL/R2RML), unifying logic, proof, trust, and smart applications. The 2017 tweak corrects the XML misconception."@en .

<#design-issues> a schema:CreativeWork ;
  schema:name "W3C Design Issues"@en ;
  schema:url <https://www.w3.org/DesignIssues/Overview.html> ;
  schema:author <#tim-berners-lee> ;
  schema:description "Tim Berners-Lee's original design documents for the World Wide Web, laying out the architecture that still underpins today's Web."@en .

<#schema-org> a schema:Organization, skos:Concept ;
  schema:name "Schema.org"@en ;
  schema:url <http://schema.org/> ;
  schema:description "A collaborative vocabulary project founded by Google, Microsoft, Yahoo, and Yandex providing structured data schemas for web content. Over 90% of web pages now embed schema.org-based metadata."@en .

<#virtuoso> a schema:SoftwareApplication, schema:Product ;
  schema:name "Virtuoso Universal Server"@en ;
  schema:url <https://virtuoso.openlinksw.com/> ;
  schema:manufacturer <#openlink-software> ;
  schema:description "A multi-model database and application server platform that natively supports RDF, SPARQL, SQL, and WebDAV, serving as the infrastructure backbone for the Linked Open Data cloud."@en .

<#mcp> a skos:Concept, schema:DefinedTerm ;
  schema:name "Model Context Protocol (MCP)"@en ;
  schema:description "An open protocol enabling LLM clients to connect to external tools and data sources through a standardized interface, bridging AI agents with structured knowledge."@en .

<#sparql> a skos:Concept, schema:DefinedTerm ;
  schema:name "SPARQL"@en ;
  schema:description "The standard query language for RDF data. SPARQL extends SQL rather than replacing it — protecting existing ODBC/JDBC/ADO.NET investments while adding graph query capabilities."@en ;
  owl:sameAs <http://dbpedia.org/resource/SPARQL> .
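
# An illustrative basic SPARQL query over this very document's graph,
# listing its sections by name (a sketch, kept as a comment so the file
# stays valid Turtle):
#
#   PREFIX schema: <http://schema.org/>
#   SELECT ?section ?name WHERE {
#     ?section a schema:ArticleSection ;
#       schema:name ?name .
#   } ORDER BY ?section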

<#posh> a skos:Concept, schema:DefinedTerm ;
  schema:name "Plain Old Semantic HTML (POSH)"@en ;
  schema:description "Using standard HTML elements and attributes to embed machine-readable metadata directly in web pages, providing a viable best practice for semantic markup without requiring XML."@en .

# ── Timeline Section ─────────────────────────────────────────────────
<#timeline> a schema:ArticleSection ;
  schema:name "Timeline: Semantic Web → Agentic Web"@en ;
  schema:position 1 ;
  schema:hasPart <#t-2001>, <#t-2006>, <#t-2011>, <#t-2017>, <#t-2022>, <#t-2025>, <#t-2026> .

<#t-2001> a schema:Event ;
  schema:name "Scientific American publishes 'The Semantic Web'"@en ;
  schema:startDate "2001-05" ;
  schema:about <#semantic-web>, <#tim-berners-lee>, <#james-hendler>, <#ora-lassila> ;
  schema:description "Berners-Lee, Hendler, and Lassila publish the foundational vision: intelligent software agents automating tasks across machine-readable web content. The famous 'Pete and Lucy' scenario is born."@en .

<#t-2006> a schema:Event ;
  schema:name "Linked Data Principles articulated"@en ;
  schema:startDate "2006" ;
  schema:about <#linked-data>, <#tim-berners-lee> ;
  schema:description "Tim Berners-Lee publishes the four Linked Data Principles, providing the practical blueprint for publishing structured data on the Web using HTTP URIs and RDF."@en .

<#t-2011> a schema:Event ;
  schema:name "Schema.org launched"@en ;
  schema:startDate "2011" ;
  schema:about <#schema-org> ;
  schema:description "Google, Microsoft, Yahoo, and Yandex launch Schema.org, providing a shared vocabulary for structured data on web pages that now covers over 90% of the web."@en .

<#t-2017> a schema:Event ;
  schema:name "Semantic Web Layer Cake tweak published"@en ;
  schema:startDate "2017-07-13" ;
  schema:about <#layer-cake>, <#source-layer-cake> ;
  schema:description "Kingsley Idehen publishes a corrected Semantic Web Layer Cake that removes the overarching XML misconception and maps each layer to explicit business benefits."@en .

<#t-2022> a schema:Event ;
  schema:name "LLMs reach mainstream adoption"@en ;
  schema:startDate "2022" ;
  schema:about <#large-language-models> ;
  schema:description "ChatGPT and other large language models achieve breakthrough public adoption, creating the conditions for a generational shift in how humans interact with structured data."@en .

<#t-2025> a schema:Event ;
  schema:name "LLMs identified as generic RDF clients"@en ;
  schema:startDate "2025" ;
  schema:about <#large-language-models>, <#rdf>, <#agentic-web> ;
  schema:description "Idehen publishes 'LLMs as Powerful Generic RDF Clients' and 'The Semantic Web Project Didn't Fail — It Was Waiting for AI,' articulating the symbiosis between LLMs and RDF-based knowledge graphs as the realization of the 2001 vision."@en .

<#t-2026> a schema:Event ;
  schema:name "What's old is new again — the Web's original design is vindicated"@en ;
  schema:startDate "2026" ;
  schema:about <#agentic-web>, <#design-issues>, <#semantic-web> ;
  schema:description "As AI accelerates, the Web's original design — open standards, linked data, machine-readable semantics — is being recognized as the foundation for the Agentic Web era. Business models, not technology, were the obstacle."@en .

# ── Layer Cake Section ──────────────────────────────────────────────
<#layer-cake-section> a schema:ArticleSection ;
  schema:name "The Semantic Web Layer Cake (2017 Tweak)"@en ;
  schema:position 2 ;
  schema:about <#layer-cake> ;
  schema:hasPart <#lc-identifiers>, <#lc-abstract-language>, <#lc-document-types>, <#lc-dictionaries>, <#lc-query>, <#lc-rules>, <#lc-unifying-logic>, <#lc-proof>, <#lc-trust>, <#lc-transmission-security>, <#lc-smart-applications> .

<#lc-identifiers> a schema:ArticleSection ;
  schema:name "Sentence Part Identifiers (URIs/IRIs)"@en ;
  schema:position 1 ;
  schema:description "HTTP URIs serve as globally unique identifiers for entities, distinct from the applications that use them. This is the foundation — without identifiers, nothing else in the stack works."@en .

<#lc-abstract-language> a schema:ArticleSection ;
  schema:name "Abstract Language (RDF)"@en ;
  schema:position 2 ;
  schema:about <#rdf> ;
  schema:description "RDF as a Language provides systematic use of signs, syntax, and semantics. Its power has nothing to do with any specific serialization format."@en .

<#lc-document-types> a schema:ArticleSection ;
  schema:name "Document Types"@en ;
  schema:position 3 ;
  schema:description "RDF can be persisted in multiple notations — Turtle, JSON-LD, RDF/XML, N-Triples — loosely coupled from the abstract model. The 2017 tweak corrects the misconception that XML is mandatory."@en .

<#lc-dictionaries> a schema:ArticleSection ;
  schema:name "Dictionaries (Vocabularies & Ontologies)"@en ;
  schema:position 4 ;
  schema:description "Schema.org, RDFS, OWL, and domain-specific ontologies define entity and relationship types. Defining data distinctly from application code enables reuse."@en .

<#lc-query> a schema:ArticleSection ;
  schema:name "Query (SPARQL)"@en ;
  schema:position 5 ;
  schema:about <#sparql> ;
  schema:description "SPARQL provides declarative DDL and DML on RDF data. It extends SQL rather than replacing it — protecting existing data infrastructure investments."@en .

<#lc-rules> a schema:ArticleSection ;
  schema:name "Rules (SWRL, SPIN, SHACL, R2RML)"@en ;
  schema:position 6 ;
  schema:description "Rules enable reusable reasoning and inference distinct from application code. R2RML transforms relational data into RDF. SHACL validates RDF data entry integrity."@en .
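
# A minimal SHACL sketch of the validation idea above: a hypothetical shape
# requiring every schema:Article to carry at least one name (shown as a
# comment so it is not loaded as part of this graph):
#
#   @prefix sh: <http://www.w3.org/ns/shacl#> .
#   <#ArticleShape> a sh:NodeShape ;
#     sh:targetClass schema:Article ;
#     sh:property [
#       sh:path schema:name ;
#       sh:minCount 1 ;
#     ] .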

<#lc-unifying-logic> a schema:ArticleSection ;
  schema:name "Unifying Logic"@en ;
  schema:position 7 ;
  schema:description "First-order logic serves as the conceptual schema for data modeling, providing a foundation independent of specific applications for long-term system integration."@en .

<#lc-proof> a schema:ArticleSection ;
  schema:name "Proof"@en ;
  schema:position 8 ;
  schema:description "Proof of Work provides a foundation for Trust. Flexible combinations of multiple factors enable calculated trust rather than binary trust decisions."@en .

<#lc-trust> a schema:ArticleSection ;
  schema:name "Trust"@en ;
  schema:position 9 ;
  schema:description "Verifiable claims about identity, content provenance, and related matters. Trust enables agile solutions that don't compromise privacy while maintaining audit trails."@en .

<#lc-transmission-security> a schema:ArticleSection ;
  schema:name "Transmission Security (Crypto)"@en ;
  schema:position 10 ;
  schema:description "PKI and TLS standards provide over-the-wire protection with built-in cryptography support, enhancing privacy during data transmission."@en .

<#lc-smart-applications> a schema:ArticleSection ;
  schema:name "Smart (Cognitive) Applications and Services"@en ;
  schema:position 11 ;
  schema:description "Built declaratively following the MVC pattern with loose coupling of data models, interaction, and visualization. Relationship-type semantics add cognition through reasoning and inference."@en .

# ── LLM + RDF Client Section ─────────────────────────────────────────
<#llm-rdf-client-section> a schema:ArticleSection ;
  schema:name "LLMs as the Generic RDF Client"@en ;
  schema:position 3 ;
  schema:about <#large-language-models>, <#rdf>, <#agentic-web> ;
  schema:hasPart <#mosaic-analogy>, <#four-reasons>, <#examples> .

<#mosaic-analogy> a schema:ArticleSection ;
  schema:name "The Mosaic → Netscape Analogy"@en ;
  schema:position 1 ;
  schema:description "Just as Mosaic and Netscape let users appreciate hypermedia at internet scale, LLMs let users access structured RDF knowledge through natural language — no SPARQL expertise required."@en .

<#four-reasons> a schema:ArticleSection ;
  schema:name "Why LLMs Are Ideal RDF Clients"@en ;
  schema:position 2 ;
  schema:description "Four reasons: (1) translate structured data into natural language insights, (2) harmonize data across disparate sources, (3) ground outputs in RDF KGs to mitigate hallucinations, (4) traverse RDF triples for discovery across knowledge graphs."@en .

<#examples> a schema:ArticleSection ;
  schema:name "Real-World Examples"@en ;
  schema:position 3 ;
  schema:description "Progressive KG creation from note-taking, augmented utility explainers, visually rich HTML generation, RDF vs LPG visualization, and API documentation from natural language prompts."@en .

# ── SemWeb Waiting Section ───────────────────────────────────────────
<#semweb-waiting-section> a schema:ArticleSection ;
  schema:name "The Semantic Web Didn't Fail — It Was Waiting for AI"@en ;
  schema:position 4 ;
  schema:about <#semantic-web>, <#large-language-models> ;
  schema:hasPart <#three-barriers>, <#infrastructure>, <#business-model> .

<#three-barriers> a schema:ArticleSection ;
  schema:name "Three Historical Barriers"@en ;
  schema:position 1 ;
  schema:description "Entity naming (solved by HTTP URLs), entity relationship representation (RDF's flexibility caused format wars), and visualization (needs semantic navigation, not node-edge diagrams). All three now resolved."@en .

<#infrastructure> a schema:ArticleSection ;
  schema:name "Infrastructure Now in Place"@en ;
  schema:position 2 ;
  schema:description "Linked Open Data cloud, Schema.org adoption across 90%+ of web pages, Virtuoso Universal Server, OPAL, and MCP form a distributed, machine-readable Web of Data."@en .

<#business-model> a schema:ArticleSection ;
  schema:name "Five-Step Business Model"@en ;
  schema:position 3 ;
  schema:description "Generate KGs → enrich with schemas → add access control → monetize via pay-per-query → distribute openly. Semantic Web technology paired with AI economics."@en .

# ── What's Old Is New Section ────────────────────────────────────────
<#whats-old-is-new-section> a schema:ArticleSection ;
  schema:name "What's Old Is New Again: Design Issues Revisited"@en ;
  schema:position 5 ;
  schema:about <#design-issues>, <#tim-berners-lee>, <#agentic-web> ;
  schema:description "The Web's fundamental architecture — open standards, linked data, machine-readable semantics — was designed from the start for what AI agents need today. Business-model conflicts, not technology gaps, caused deviations."@en .

# ── FAQ ──────────────────────────────────────────────────────────────
<#faq> a schema:FAQPage ;
  schema:name "Frequently Asked Questions"@en ;
  schema:mainEntity <#q1>, <#q2>, <#q3>, <#q4>, <#q5>, <#q6>, <#q7>, <#q8>, <#q9>, <#q10>, <#q11>, <#q12> .

<#q1> a schema:Question ;
  schema:name "What is the core thesis connecting Semantic Web and LLMs?"@en ;
  schema:acceptedAnswer <#a1> .
<#a1> a schema:Answer ;
  schema:text "LLMs function as the generic RDF client that the Semantic Web always needed. Just as Mosaic/Netscape made HTML accessible to everyone, LLMs make RDF-based structured data accessible through natural language — no SPARQL expertise required."@en .

<#q2> a schema:Question ;
  schema:name "Did the Semantic Web project fail?"@en ;
  schema:acceptedAnswer <#a2> .
<#a2> a schema:Answer ;
  schema:text "No — it was waiting for AI. The infrastructure (LOD cloud, Schema.org, SPARQL, Virtuoso) was built over two decades. LLMs are the missing 'yin' that makes the structured data 'yang' practically usable by non-experts through natural language interfaces."@en .

<#q3> a schema:Question ;
  schema:name "What was the 2001 Scientific American vision?"@en ;
  schema:acceptedAnswer <#a3> .
<#a3> a schema:Answer ;
  schema:text "Berners-Lee, Hendler, and Lassila described software agents that could autonomously schedule appointments, find providers, and negotiate tasks by traversing machine-readable web content. The famous 'Pete and Lucy' scenario showed agents coordinating medical appointments, insurance verification, and schedule matching — all through semantic data."@en .

<#q4> a schema:Question ;
  schema:name "What does 'what's old is new again' mean in this context?"@en ;
  schema:acceptedAnswer <#a4> .
<#a4> a schema:Answer ;
  schema:text "The Web's original design — open standards, linked data, and machine-readable semantics — anticipated the needs of AI agents decades ago. The current AI acceleration is vindicating that architecture. Business-model conflicts, not technology, delayed its realization."@en .

<#q5> a schema:Question ;
  schema:name "How do LLMs reduce AI hallucinations?"@en ;
  schema:acceptedAnswer <#a5> .
<#a5> a schema:Answer ;
  schema:text "By grounding LLM outputs in RDF Knowledge Graphs — which provide verifiable, structured facts with provenance. The KG serves as a 'source of truth' that the LLM can reference, cite, and reason over, significantly reducing fabricated responses."@en .

<#q6> a schema:Question ;
  schema:name "What is the Semantic Web Layer Cake?"@en ;
  schema:acceptedAnswer <#a6> .
<#a6> a schema:Answer ;
  schema:text "A layered architecture diagram showing how Semantic Web standards build on each other: from URIs/IRIs at the base, through RDF abstract language, document types, dictionaries/ontologies, SPARQL query, rules (SWRL/SHACL), unifying logic, proof, trust, and smart applications at the top. The 2017 tweak corrects the XML misconception and adds business benefit annotations."@en .

<#q7> a schema:Question ;
  schema:name "Why is XML not mandatory for the Semantic Web?"@en ;
  schema:acceptedAnswer <#a7> .
<#a7> a schema:Answer ;
  schema:text "RDF is an abstract data model, not tied to any specific serialization. RDF can be expressed in Turtle, JSON-LD, N-Triples, RDF/XML, and other formats. Plain Old Semantic HTML (POSH) also provides viable embedded metadata. The original Layer Cake overstated XML's role."@en .

<#q8> a schema:Question ;
  schema:name "What three barriers did the Semantic Web face?"@en ;
  schema:acceptedAnswer <#a8> .
<#a8> a schema:Answer ;
  schema:text "(1) Entity naming — solved by HTTP URIs/IRIs as globally unique identifiers. (2) Entity relationship representation — RDF's flexibility caused 'format wars' that are now resolved. (3) Visualization — semantic navigation (via AI agents) replaces crude node-edge diagrams."@en .

<#q9> a schema:Question ;
  schema:name "What is the Agentic Web?"@en ;
  schema:acceptedAnswer <#a9> .
<#a9> a schema:Answer ;
  schema:text "An era where AI agents act as intermediaries between humans and complex structured data — automating queries, generating visual outputs, and supporting semantic mashups. It is the practical realization of the 2001 Semantic Web vision, enabled by the LLM + RDF symbiosis."@en .

<#q10> a schema:Question ;
  schema:name "What role does Schema.org play?"@en ;
  schema:acceptedAnswer <#a10> .
<#a10> a schema:Answer ;
  schema:text "Schema.org provides a shared vocabulary for structured data on web pages. Over 90% of web pages now embed schema.org-based RDF metadata, creating a massive, distributed knowledge graph that LLMs can access and reason over."@en .

<#q11> a schema:Question ;
  schema:name "How does SPARQL relate to SQL?"@en ;
  schema:acceptedAnswer <#a11> .
<#a11> a schema:Answer ;
  schema:text "SPARQL extends SQL rather than replacing it. It adds graph query capabilities while protecting existing ODBC/JDBC/ADO.NET investments. This enables mixing and matching 'best of class' applications — the core principle behind Digital Transformation."@en .

<#q12> a schema:Question ;
  schema:name "What is the business model for Semantic Web + AI?"@en ;
  schema:acceptedAnswer <#a12> .
<#a12> a schema:Answer ;
  schema:text "Five steps: generate knowledge graphs from existing data, enrich them with schemas and ontologies, add access control, monetize via pay-per-query, and distribute openly. This creates a virtuous cycle where structured data becomes more valuable as it is reused."@en .

# ── Glossary ─────────────────────────────────────────────────────────
<#glossary> a schema:DefinedTermSet ;
  schema:name "Glossary of Key Terms"@en ;
  schema:hasDefinedTerm
    <#semantic-web>,
    <#large-language-models>,
    <#rdf>,
    <#linked-data>,
    <#agentic-web>,
    <#layer-cake>,
    <#design-issues>,
    <#sparql>,
    <#posh>,
    <#mcp> .

# ── HowTo: Leverage Semantic Web + LLM Symbiosis ─────────────────────
<#howto> a schema:HowTo ;
  schema:name "How to Leverage the Semantic Web + LLM Symbiosis"@en ;
  schema:description "A practical guide for organizations to harness the symbiosis between structured RDF knowledge graphs and large language models to build agentic AI applications."@en ;
  schema:step <#step1>, <#step2>, <#step3>, <#step4>, <#step5>, <#step6>, <#step7> .

<#step1> a schema:HowToStep ;
  schema:name "Generate Knowledge Graphs from existing data"@en ;
  schema:position 1 ;
  schema:text "Use RDF-based tools to convert existing structured data (CSV, SQL, JSON) into RDF knowledge graphs. Leverage R2RML for relational-to-RDF mapping and RDF spongers for unstructured content. Start with Schema.org vocabulary as the baseline."@en .
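
# A minimal R2RML mapping sketch for the relational-to-RDF conversion named
# in the step above (table and column names are hypothetical; shown as a
# comment so it is not loaded as part of this graph):
#
#   @prefix rr: <http://www.w3.org/ns/r2rml#> .
#   <#ProductMap> a rr:TriplesMap ;
#     rr:logicalTable [ rr:tableName "PRODUCTS" ] ;
#     rr:subjectMap [
#       rr:template "http://example.com/product/{ID}" ;
#       rr:class schema:Product ;
#     ] ;
#     rr:predicateObjectMap [
#       rr:predicate schema:name ;
#       rr:objectMap [ rr:column "NAME" ] ;
#     ] .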

<#step2> a schema:HowToStep ;
  schema:name "Enrich with schemas, ontologies, and cross-references"@en ;
  schema:position 2 ;
  schema:text "Layer domain-specific ontologies on top of Schema.org. Add owl:sameAs links to DBpedia, Wikidata, and other authoritative sources. Use SKOS for concept schemes and defined term sets. The richer the semantics, the more powerful the LLM interaction."@en .
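
# An enrichment sketch for the step above: cross-referencing a local entity
# to an authoritative source via owl:sameAs (the local IRI is hypothetical;
# shown as a comment so it is not loaded as part of this graph):
#
#   <#acme-corp> a schema:Organization ;
#     schema:name "ACME Corp"@en ;
#     owl:sameAs <http://dbpedia.org/resource/Acme_Corporation> .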

<#step3> a schema:HowToStep ;
  schema:name "Deploy a SPARQL endpoint"@en ;
  schema:position 3 ;
  schema:text "Host the knowledge graph behind a SPARQL endpoint using Virtuoso or a similar RDF-capable database. Ensure the endpoint is accessible via standard HTTP protocols with appropriate authentication and CORS settings for LLM agent access."@en .
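
# A SPARQL Protocol request sketch against such an endpoint, using a plain
# HTTP POST per the spec (the endpoint URL is illustrative; kept as a
# comment so this file stays valid Turtle):
#
#   curl -H "Accept: application/sparql-results+json" \
#     --data-urlencode 'query=SELECT ?s WHERE { ?s a <http://schema.org/Article> } LIMIT 5' \
#     https://example.com/sparql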

<#step4> a schema:HowToStep ;
  schema:name "Connect LLMs via MCP or direct SPARQL"@en ;
  schema:position 4 ;
  schema:text "Use the Model Context Protocol (MCP) to give LLM clients structured access to the knowledge graph. Alternatively, implement direct SPARQL query capabilities in the LLM tool chain. The key is enabling structured, verifiable data retrieval alongside the LLM's generative capabilities."@en .

<#step5> a schema:HowToStep ;
  schema:name "Implement RAG with KG grounding"@en ;
  schema:position 5 ;
  schema:text "Use Retrieval-Augmented Generation (RAG) where the retrieval corpus is the RDF knowledge graph — not raw documents. This provides structured, provenance-tracked facts to the LLM, significantly reducing hallucinations compared to vector-only RAG approaches."@en .
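
# A KG-grounded retrieval sketch for the RAG step above: fetch facts about a
# topic together with the named graph they came from, so the LLM can cite
# provenance (illustrative query, shown as a comment):
#
#   SELECT ?s ?p ?o ?g WHERE {
#     GRAPH ?g {
#       ?s <http://schema.org/about> <#semantic-web> ;
#          ?p ?o .
#     }
#   } LIMIT 20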

<#step6> a schema:HowToStep ;
  schema:name "Add access control and monetization"@en ;
  schema:position 6 ;
  schema:text "Implement WebID-based authentication and attribute-based access control (ABAC) on the SPARQL endpoint. Enable pay-per-query monetization for premium data access. Use the Linked Data Platform (LDP) for read-write knowledge graph interaction."@en .

<#step7> a schema:HowToStep ;
  schema:name "Build agentic applications"@en ;
  schema:position 7 ;
  schema:text "Create AI agents that combine LLM natural language understanding with structured KG querying. The agents should translate user intent into SPARQL/SQL/GraphQL, retrieve verifiable results, and present them with provenance. The 2001 Pete and Lucy scenario is now implementable."@en .
