Friday, June 24, 2016 - 4:38pm
Following our successful meetings in Europe and the US, our next DBpedia meeting will be held in Leipzig on September 15th, co-located with SEMANTiCS.
* Highlights *
– Keynote by Lydia Pintscher, Wikidata
– A session for the “DBpedia references and citations challenge”
– A session on DBpedia ontology by members of the DBpedia ontology committee
– Tell us what cool things you do with DBpedia: https://goo.gl/AieceU
– As always, there will be tutorials to learn about DBpedia
* Quick facts *
– Web URL: http://wiki.dbpedia.org/meetings/Leipzig2016
– Hashtag: #DBpediaLeipzig
– When: September 15th, 2016
– Where: University of Leipzig, Augustusplatz 10, 04109 Leipzig
– Call for Contribution: submission form
– Registration: Participation is free, but registration is required (with an option to buy DBpedia support tickets)
* Sponsors and Acknowledgments *
– Institute for Applied Informatics (InfAI)
– SEMANTICS Conference (Sep 12-15, 2016 in Leipzig)
If you would like to become a sponsor for the 7th DBpedia Meeting, please contact the DBpedia Association (firstname.lastname@example.org).
* Organisation *
– Magnus Knuth, HPI, DBpedia German/Commons
– Monika Solanki, University of Oxford, DBpedia Ontology
– Julia Holze, DBpedia Association
– Dimitris Kontokostas, AKSW/KILT, DBpedia Association
– Sebastian Hellmann, AKSW/KILT, DBpedia Association
Your DBpedia Association
Tuesday, June 7, 2016 - 12:04pm
In the latest release (2015-10), DBpedia started exploring citation and reference data from Wikipedia, and we were pleasantly surprised by the rich data we managed to extract.
This data holds huge potential, especially for the Wikidata challenge of providing a reference source for every statement. It describes not only a lot of bibliographical data, but also a lot of web pages and many other sources around the web.
The data we extract at the moment is quite raw and can be improved in many different ways. Some of the potential improvements are:
- Extend the citation extractor to handle other Wikipedia language editions; currently only English Wikipedia is supported.
- Map the data to a relevant bibliographic ontology (there are many candidates; although BIBO got the most votes, we are open to other ontologies)
- Map the data to existing bibliographic LOD (e.g. TEL has 100M records, WorldCat 300M) or online books (e.g. Google Books). See the citationIri issue.
- Ways to merge / fuse identical citations from multiple articles
- Use the citation data in the Wikidata primary sources tool
- Surprise us with your ideas!
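To make the fusion idea from the list above concrete, here is a minimal sketch of one possible strategy: citations extracted from different articles are merged on a normalized key (a DOI when present, otherwise a title/year heuristic). All field names and the key-building logic are illustrative assumptions, not the actual dataset schema.

```python
# Toy sketch of citation fusion: identical citations extracted from
# different Wikipedia articles are merged on a normalized key.
# Field names ("doi", "title", "articles", ...) are illustrative only.

def citation_key(citation):
    """Build a fusion key from fields that identify a work."""
    doi = citation.get("doi")
    if doi:                       # a DOI alone is a reliable identifier
        return ("doi", doi.lower())
    return (
        "heuristic",
        citation.get("title", "").strip().lower(),
        citation.get("year", ""),
    )

def fuse_citations(citations):
    """Merge duplicate citations, keeping the union of their fields."""
    fused = {}
    for cit in citations:
        key = citation_key(cit)
        merged = fused.setdefault(key, {"articles": set()})
        merged["articles"].update(cit.get("articles", []))
        for field, value in cit.items():
            if field != "articles":
                merged.setdefault(field, value)   # first value wins
    return list(fused.values())

# Two extractions of the same paper from different Wikipedia articles:
dups = [
    {"doi": "10.1000/X", "title": "Some Paper", "articles": ["Berlin"]},
    {"doi": "10.1000/x", "year": "2001", "articles": ["Leipzig"]},
]
fused = fuse_citations(dups)
```

A real fusion step would of course need fuzzier matching (author lists, page ranges, normalized titles), but the key-then-merge shape stays the same.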
We welcome contributions that improve the existing citation dataset in any way, and we are open to collaboration and happy to help. Results will be presented at the next DBpedia meeting on 15 September 2016 in Leipzig, co-located with SEMANTiCS 2016. Each participant should submit a short description of their contribution by Monday, 12 September 2016 and present their work at the meeting. Comments and questions can be posted on the DBpedia discussion and developer lists or on our new DBpedia ideas page.
Submissions will be judged by the Organizing Committee and the best two will receive a prize.
- Vladimir Alexiev, Ontotext and DBpedia BG
- Anastasia Dimou, Ghent University, iMinds
- Dimitris Kontokostas, KILT/AKSW, DBpedia Association
Your DBpedia Association
Thursday, June 2, 2016 - 9:14am
DBpedia will be part of the 19th International Conference on Business Information Systems (6-8 July 2016) at the University of Leipzig. The conference addresses a wide scientific community and experts involved in the development of business computing applications. The three-day conference program is a mix of workshops, tutorials and paper sessions. Below you will find more information about the DBpedia tutorial:
Wednesday, July 6th, 2016
DBpedia Tutorial on Semantic Knowledge Integration in established Data (IT) Environments
Enriching data with a semantic layer and linking entities is key to what is loosely called Smart Data. An easy, yet comprehensive way of achieving this is the use of Linked Data standards.
In this DBpedia tutorial, we will introduce
- the basic ideas of Linked Data and other Semantic Web standards
- existing open datasets that can be freely reused (including DBpedia of course)
- software and services in the DBpedia infrastructure such as the DBpedia SPARQL service, the lookup service and the DBpedia Spotlight Entity Linking service
- common business use cases that will help to apply the learned lessons into practice
- an integration example for a hypothetical environment
In particular, we would like to show how to seamlessly integrate Linked Data technologies into existing IT- and data-environments and discuss how to link private corporate data knowledge graphs to DBpedia and Linked Open Data. Another special focus is on finding links in text and unstructured data.
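The entity-linking step mentioned above is exposed by DBpedia Spotlight as a plain HTTP service. As a taste, the sketch below builds such an annotation request with the standard library only; the localhost endpoint, port and confidence value are assumptions for a self-hosted instance, not the tutorial's actual setup.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Endpoint of a locally running Spotlight instance; adjust host/port to
# wherever your service lives (assumed values, see the note above).
ENDPOINT = "http://localhost:2222/rest/annotate"

def spotlight_request(text, confidence=0.5):
    """Build an HTTP request asking Spotlight to link entities in `text`."""
    params = urlencode({"text": text, "confidence": confidence})
    return Request(
        ENDPOINT + "?" + params,
        headers={"Accept": "application/json"},  # ask for JSON, not XML
    )

req = spotlight_request("Leipzig hosts the SEMANTiCS conference.")
# Sending it with urllib.request.urlopen(req) would return a JSON list of
# surface forms with their linked DBpedia resources.
```

The same request shape works against a hosted Spotlight endpoint; only the base URL changes.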
Duration: 2 x 90 minutes (half day)
Target audience:
- Practitioners who would like to learn about Linked Data and take home the know-how to apply it in their organisation
- Researchers and students who would like to use Linked Data in their research
The tutorial is held by core members of the DBpedia Association and members of the AKSW/KILT research group in the context of three large research projects:
Your DBpedia Association
Tuesday, April 26, 2016 - 10:44am
DBpedia participated for the fourth time in the Google Summer of Code program. This was a quite competitive year (like every year), with more than forty students applying for a DBpedia project. In the end, 8 great students from all around the world were selected and will work on their projects during the summer. Here’s a detailed list of the projects:
A Hybrid Classifier/Rule-based Event Extractor for DBpedia Proposal by Vincent Bohlen
In modern times, the amount of information published on the internet is growing to an immeasurable extent. Humans are no longer able to gather all the available information by hand but are more and more dependent on machines collecting relevant information automatically. This is why automatic information extraction, and especially automatic event extraction, is important. In this project I will implement a system for event extraction using classification and rule-based event extraction. The underlying data for both approaches will be identical. I will gather Wikipedia articles and perform a variety of NLP tasks on the extracted texts. First, I will annotate the named entities in the text using named entity recognition performed by DBpedia Spotlight. Additionally, I will annotate the text with frame semantics using FrameNet frames. I will then use the collected information, i.e. frames, entities and entity types, with the two aforementioned methods to decide whether the collection is an event or not. Mentor: Marco Fossati (SpazioDati)
Automatic mappings extraction by Aditya Nambiar
DBpedia currently maintains a mapping from Wikipedia infobox properties to the DBpedia ontology, since several similar templates exist to describe the same type of infoboxes. The aim of the project is to enrich the existing mappings and possibly correct incorrect mappings using Wikidata.
Several Wikipedia pages use Wikidata values directly in their infoboxes. Hence, by using the mapping between Wikidata properties and DBpedia ontology classes along with the infobox data across several such wiki pages, we can collect many such mappings. The first phase of the project revolves around using various such Wikipedia templates, finding their usages across Wikipedia pages and extracting as many mappings as possible.
In the second half of the project, we use machine learning techniques to take care of any accidental or outlier usage of Wikidata mappings in Wikipedia. At the end of the project we will be able to obtain a correct set of mappings which we can use to enrich the existing ones. Mentor: Markus Freudenberg (AKSW/KILT)
Combining DBpedia and Topic Modelling by wojtuch
DBpedia, a crowd-sourced, open community project extracting content from Wikipedia, stores this information in a huge RDF graph. DBpedia Spotlight is a tool which delivers the DBpedia resources that are mentioned in a document.
Using DBpedia Spotlight to extract Named Entities from Wikipedia articles and then applying a topic modelling algorithm (e.g. LDA) with URIs of DBpedia resources as features would result in a model, which is capable of describing the documents with the proportions of the topics covering them. But because the topics are also represented by DBpedia URIs, this approach could result in a novel RDF hierarchy and ontology with insights for further analysis of the emerged subgraphs.
The direct implication and first application scenario for this project would be utilizing the inference engine in DBpedia Spotlight, as an additional step after the document has been annotated and predicting its topic coverage. Mentor: Alexandru Todor (FU Berlin)
DBpedia Lookup Improvements by Kunal.Jha
DBpedia is one of the most extensive and most widely used knowledge bases, available in over 125 languages. DBpedia Lookup is a web service that allows users to obtain various DBpedia URIs for a given label (keywords/anchor texts). The service provides two different types of search APIs, namely Keyword Search and Prefix Search. The lookup service currently returns query results in XML (default) and JSON formats and works for English. It is based on a Lucene index providing a weighted label lookup, which combines string similarity with a relevance ranking in order to find the most relevant matches for a given label. As part of GSoC 2016, I propose to implement improvements with the intention of making the system more efficient and versatile. Mentor: Axel Ngonga (AKSW)
This project aims at finding mappings between the classes (eg. dbo:Person, dbo:City) in the DBpedia ontology and infobox templates on pages of Wikipedia resources using machine learning. Mentor: Nilesh Chakraborty (University of Bonn)
This project is about integrating RML in the DBpedia extraction framework. DBpedia is derived from Wikipedia infoboxes using the extraction framework and mappings defined in wikitext syntax. A next step would be replacing the wikitext-defined mappings with RML. To accomplish this, adjustments will have to be made to the extraction framework. Mentor: Dimitris Kontokostas (AKSW/KILT)
The List Extractor by FedBai
The project focuses on the extraction of relevant but hidden data which lies inside lists on Wikipedia pages. The information is unstructured and thus cannot easily be used to form semantic statements and be integrated into the DBpedia ontology. Hence, the main task consists in creating a tool which can take one or more Wikipedia pages containing lists as input and then construct appropriate mappings to be inserted into a DBpedia dataset. The extractor must prove to work well on a given domain and have the ability to be expanded towards generalization. Mentor: Marco Fossati (SpazioDati)
The Table Extractor by s.papalini
Wikipedia is full of data hidden in tables. The aim of this project is to explore the possibilities of taking advantage of all the data represented in tables on wiki pages, in order to populate the different versions of DBpedia with new data of interest. The Table Extractor is to be the engine of this data “revolution”: it would achieve the final purpose of extracting the semi-structured data from all those tables now scattered across most wiki pages. Mentor: Marco Fossati (SpazioDati)
At the beginning of September 2016 you will receive news about the successful Google Summer of Code 2016 student projects. Stay tuned and follow us on Facebook and Twitter or visit our website for the latest news.
Your DBpedia Association
Friday, April 1, 2016 - 10:42am
We proudly present our new 2015-10 DBpedia release, which is available now via http://dbpedia.org/sparql. Go and check it out!
This DBpedia release is based on updated Wikipedia dumps dating from October 2015 featuring a significantly expanded base of information as well as richer and cleaner data based on the DBpedia ontology.
So, what did we do?
The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2015-10 ontology encompasses
- 739 classes (DBpedia 2015-04: 735)
- 1,099 properties with reference values (a/k/a object properties) (DBpedia 2015-04: 1,098)
- 1,596 properties with typed literal values (a/k/a datatype properties) (DBpedia 2015-04: 1,583)
- 132 specialized datatype properties (DBpedia 2015-04: 132)
- 407 owl:equivalentClass and 222 owl:equivalentProperty mappings to external vocabularies (DBpedia 2015-04: 408 and 200, respectively)
The editors community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 2015-10 extraction, we used a total of 5553 template mappings (DBpedia 2015-04: 4317 mappings). For the first time the top language, gauged by number of mappings, is Dutch (606 mappings), surpassing the English community (600 mappings).
And what are the (breaking) changes ?
- English DBpedia switched to IRIs from URIs.
- The instance-types dataset is now split to two files:
- “instance-types” contains only direct types.
- “instance-types-transitive” contains transitive types.
- The “mappingbased-properties” file is now split into three files:
- “mappingbased-literals” contains mapping based statements with literal values.
- We added a new extractor for citation data.
- All datasets are available in .ttl and .tql serializations.
- We are providing DBpedia as a Docker image.
- From now on, we provide extensive dataset metadata by adding DataIDs for all extracted languages to the respective language directories.
- In addition, we revamped the dataset table on the download page. It’s created dynamically based on the DataID of all languages. Likewise, the tables on the statistics page are now based on files providing information about all mapping languages.
- From now on, we also include the original Wikipedia dump files (‘pages_articles.xml.bz2’) alongside the extracted datasets.
- A complete changelog can always be found in the git log.
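The first breaking change above (the switch from URIs to IRIs) means English DBpedia identifiers now keep Unicode characters readable instead of percent-encoding them. A minimal illustration with the standard library (the resource name is just an example):

```python
from urllib.parse import quote, unquote

# An IRI keeps non-ASCII characters as-is:
iri = "http://dbpedia.org/resource/Côte_d'Ivoire"

# The old-style URI form percent-encodes everything outside ASCII.
# We keep the scheme separators and the apostrophe unescaped here:
uri = quote(iri, safe=":/'")
assert "%C3%B4" in uri          # ô became two percent-encoded UTF-8 bytes

# Decoding the URI recovers the IRI, so the two forms are equivalent:
assert unquote(uri) == iri
```

Clients that stored percent-encoded English DBpedia identifiers therefore need a decoding step like `unquote` when joining against the new release.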
And what about the numbers?
Altogether the new DBpedia 2015-10 release consists of 8.8 billion (2015-04: 6.9 billion) pieces of information (RDF triples) out of which 1.1 billion (2015-04: 737 million) were extracted from the English edition of Wikipedia, 4.4 billion (2015-04: 3.8 billion) were extracted from other language editions, and 3.2 billion (2015-04: 2.4 billion) came from DBpedia Commons and Wikidata. In general we observed a significant growth in raw infobox and mapping-based statements of close to 10%. Thorough statistics are available via the Statistics page.
And what’s up next?
We will be working to move away from the mappings wiki but we will have at least one more mapping sprint. Moreover, we have some cool ideas for GSOC this year. Additional mentors are more than welcome.
And who is to blame for the new release?
We want to thank all editors that contributed to the DBpedia ontology mappings via the Mappings Wiki, all the GSoC students and mentors working directly or indirectly on the DBpedia release and the whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.
Special thanks go to Markus Freudenberg and Dimitris Kontokostas (University of Leipzig), Volha Bryl (University of Mannheim / Springer), Heiko Paulheim (University of Mannheim), Václav Zeman and the whole LHD team (University of Prague), Marco Fossati (FBK), Alan Meehan (TCD), Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy), Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software), OpenLink Software (http://www.openlinksw.com/), Ruben Verborgh from Ghent University – iMinds, Ali Ismayilov (University of Bonn), Vladimir Alexiev (Ontotext) and members of the DBpedia Association, the AKSW and the department for Business Information Systems of the University of Leipzig for their commitment in putting tremendous time and effort into getting this done.
The work on the DBpedia 2015-10 release was financially supported by the European Commission through the project ALIGNED – quality-centric, software and data engineering (http://aligned-project.eu/).
Have fun and all the best!
Have you backlinked your data yet? – A retrospective of the 6th DBpedia community meeting in The Hague
Wednesday, March 23, 2016 - 2:09pm
We thought it was about time to go orange again and to meet and celebrate the growing Dutch DBpedia community together with the Dutch DBpedia Chapter. Thus, following our successful US event last November, the National Library of the Netherlands hosted the 6th DBpedia community meeting in The Hague on February 12th.
First and foremost, we would like to thank TNO for organizing the pre-event and the National Library of the Netherlands, especially Menno Rasch (Director of KB operations), for sponsoring the catering during the DBpedia community meeting.
Before diving into DBpedia topics, we had a welcome reception on February 11th with snacks and drinks at TNO – New Babylon. Around 40 people from the DBpedia community, members from TNO and its Data Science Department and representatives from the Platform Linked Data Netherlands engaged in vital exchanges about Linked Data topics in the Netherlands.
Sebastian Hellmann gave a short introduction about DBpedia and the recently founded DBpedia Association. After Jean-Louis Roso talked about the TNO Data Science Department and current developments and projects, Erwin Folmer presented the Platform Linked Data Netherlands (PiLOD).
A poster and demo session right afterwards gave people from TNO the opportunity to present and discuss projects currently carried out at TNO.
Below is a short list of the posters presented during the pre-event:
- The Smart Appliances REFerence ontology (SAREF): Standardization in IoT
- Linked Data in Horticulture
- GOOSE: Semantic search in Image Retrieval
- Logistics: Ontologies for the Physical Internet
- SWELL: Smart Reasoning for Well Being
The social gathering with snacks and drinks that followed encouraged talks about current developments in the DBpedia community and about ongoing projects. According to TNO representative Laura Daniele, the pre-event was very successful. She summarized the evening of the welcome reception: “It was very inspiring to see the DBpedia community in action. There were lots of interesting projects that use DBpedia as well as lively discussions on the challenges faced by the community, and of course, the event was a great opportunity for networking!”
Following the pre-event, the main event attracted 95 participants and featured special sessions dedicated to the DBpedia showcases, the DBpedia ontology, and the challenges of DBpedia and digital heritage.
During the opening session, Menno Rasch, host of the meeting and Director of KB operations, highlighted the importance of raising awareness of the DBpedia brand in order to build a DBpedia community.
The newly founded DBpedia Association and the related new charter regulating organizational issues in the DBpedia community were one of the focuses during the early morning hours, right before several interesting keynote presentations opened the discussion about DBpedia and its usage in the Netherlands.
Marco de Niet, representative of Digital Heritage Foundation (DEN Foundation), the Dutch knowledge centre for digital heritage, talked about “the National Strategy for Digital Heritage in the Netherlands”.
Marco Brattinga and Arjen Santema from the Land Registry and Mapping Agency (Kadaster) presented a framework to describe the data and metadata in registration in relation to a concept schema that describes what the registration is about. Apart from the ideas behind the framework, their presentation included a showcase of examples from the cadastral registration as well as the topographic map and the information node addresses and buildings.
The morning session was closed by Paul Groth from Elsevier, who gave a presentation about knowledge graph construction and the role of DBpedia and other Wikipedia-based knowledge. He discussed the importance of structured data as key to coordinating data in order to build better taxonomies. He also pointed to the importance of having an up-to-date, publicly available knowledge graph as a reference for constructing internal knowledge graphs.
After Lunch Track
DBpedia is one of the biggest and most important focal points of the Linked Open Data movement. Thus, the after-lunch track focused very much on DBpedia usage during the dedicated showcase session, which started with the new DBpedia & DBpedia+ Data Stack release (planned for 2016-04).
Afterwards, the session continued with further DBpedia-related discussions, in which various practical DBpedia matters were tackled, such as DBpedia in the EUROPEANA Food and Drink project, the use of DBpedia for improved vaccine information systems, and using Elasticsearch + DBpedia to maintain a searchable database of global power plants.
The afternoon track comprised four DBpedia highlight sessions, namely DBpedia and Ontologies, DBpedia and Heritage, DBpedia hands-on development, and DBpedia and NLP. Firstly, the DBpedia ontology group discussed possible ontology usages and presented the results of the latest DBpedia ontology survey. In the following 75 minutes, during the DBpedia and Heritage session, special challenges and opportunities of reference data for digital heritage were addressed by experts from EUROPEANA, iMinds, RCE and KB, the National Library of the Netherlands. Thirdly, members of the DBpedia Association and the AKSW/KILT group from Leipzig led a practical session for developers and DBpedia enthusiasts to talk about technical issues and challenges in DBpedia, and also held a tutorial session for DBpedia newbies.
The end of the event was dedicated to NLP and the application of Linked Data on Language Technologies, especially entity linking, topics which are of vital importance for the research of AKSW/KILT members at the University of Leipzig.
Below is a list of all presentations given during the meeting.
- Sebastian Hellmann, DBpedia Association AKSW/KILT – Have you Backlinked your Data yet?
- Marco de Niet, DEN Foundation – Digital Heritage in the Netherlands
- Marco Brattinga and Arjen Santema, Land Registry and Mapping Agency (Kadaster) – Keynote #1:
- Paul Groth, Elsevier – Knowledge Graph Construction and the Role of DBPedia
- Antoine Isaac, Europeana – Enriching Cultural Heritage Data with DBpedia
- Patrik Schneider, Siemens and WU Wien – DBpedia Wayback Machine
- Richard Nagelmaeker, – BlueSky – Knowledge Diviner – DBpedia demo
- Laura Daniele, TNO – GOOSE
- Christina Unger, CITEC – DBlexipedia: A nucleus for a multilingual lexical Semantic Web
- Raphael Boyer, DBpedia FR / INRIA – DBpedia Historic data
- Chris Davis – Using Elasticsearch + DBpedia to maintain a searchable database of global power plants
- Ali Khalili – Linked Data Reactor
- Vladimir Alexiev, Ontotext – Using DBPedia in Europeana Food and Drink
- Nilesh Chakraborty, AKSW/KILT – FREME – Open Framework of e-Services for Multilingual and Semantic Enrichment of Digital Content.
- Monika Solanki,University of Oxford – Using DBpedia for improved Vaccine Information Systems
- Ralph Schäfermeier and Alexandru Todor, FU Berlin – WebProtégé demo & aspect oriented programming
- Gerard Kuys / Ordina – Classification Ontology
- Vladimir Alexiev / Ontotext – DBpedia mappings quality problems
- Enno Meijers, Dutch DBpedia – DBpedia & Heritage: Challenges and opportunities of reference data for digital heritage
- Hugo Manguinhas, Europeana – Building an ecosystem of networked references
- Anastasia Dimou, iMinds – RML – generating high quality Linked Data
- Joop Vanderheiden, RCE – Histograph: geocoding places of the past
- Olaf Janssen, KB – “Illegal newspapers in WWII” Wikipedia/DBpedia project
- Christina Unger, CITEC – Towards a Linguistic Linked Data Ecosystem (Results of the LIDER project)
- Giuseppe Futia – TellMeFirst: A Knowledge Domain Discovery Framework
- Chris Davis – Mapping the Bio-economy using DBpedia Spotlight
Summing up, the 6th community meeting brought together more than 95 DBpedia enthusiasts from the Netherlands and Europe, who engaged in vital conversations about interesting projects and approaches to questions and problems revolving around DBpedia, not only during the dedicated sessions but also during networking breaks. The recently founded DBpedia Association was strongly represented, with presentations from Sebastian Hellmann, Dimitris Kontokostas, Nilesh Chakraborty and Markus Freudenberg.
Finally, we would like to thank the organizers Enno Meijers, Richard Nagelmaker, Gerald Wildenbeest, Gerard Kuys, Monika Solanki and representatives of the DBpedia Association such as Dimitris Kontokostas and Sebastian Hellmann for devoting their time to the organization of the meeting and the programme. We are now looking forward to the 7th DBpedia Community Meeting, which will be held in Leipzig again on September 15th, 2016, during the SEMANTiCS conference.
Tuesday, February 9, 2016 - 1:24pm
3 more days to go…
until we finally meet again for our next DBpedia Community Meeting, hosted by the National Library of the Netherlands in The Hague on February 12th. One day before, we will have a welcome reception (5-8pm) with snacks and drinks at TNO – New Babylon.
Only 15 seats are left for the next DBpedia Community Meeting, so come and get your ticket to be part of this event.
The 6th edition of this event covers a discussion about the Dutch DBpedia becoming the first chapter with institutional support of the new DBpedia Association, as well as a session on the DBpedia ontology by members of the newly founded DBpedia working group. On top of that, we will have a DBpedia showcase session on the DBpedia+ Data Stack 2015-10 release and quality control in DBpedia, as well as presentations about the LIDER and GOOSE projects. And as usual, our event features a dev and tutorial session to learn about DBpedia.
Experts in the field of semantic technologies from Elsevier and the Dutch Land Registry and Mapping Agency, as well as the Europeana project and the DEN foundation, will speak about topics such as Digital Heritage in the Netherlands and Knowledge Graph Construction and the Role of DBpedia.
Attending the DBpedia Community meeting is free, but you need to register here. Optionally, if you would like to support DBpedia with a little more than your presence during the event, you can choose a DBpedia support ticket. Have a look here:
We would like to thank the following organizations for sponsoring and supporting our endeavour.
- National Library of the Netherlands (http://www.kb.nl)
- ALIGNED Project (http://aligned-project.eu/)
- Institute for Applied Informatics (InfAI, http://infai.org/en/AboutInfAI )
- OpenLink Software (http://www.openlinksw.com/ )
- SEMANTiCS Conference Sep 12-15, 2016 in Leipzig (http://2016.semantics.cc )
- TNO – New Babylon (https://www.tno.nl/en/about-tno/locations/locatie-den-haag-new-babylon/lid12138/)
Wednesday, January 27, 2016 - 7:21pm
The submission deadline for mentoring organizations to submit their application for the 2016 Google Summer of Code is approaching quickly. As DBpedia is again planning to be a vital part of the program, we would like to take this opportunity to give you a little recap of the projects mentored by DBpedia members during the past GSoC, which ended in November 2015.
Dimitris Kontokostas, Marco Fossati, Thiago Galery, Joachim Daiber and Ruben Verborgh, members of the DBpedia community, mentored 8 great students from around the world. Below are some of the projects they completed.
Fact Extraction from Wikipedia Text by Emilio Dorigatti
DBpedia is already quite mature when dealing with Wikipedia’s semi-structured content like infoboxes, links and categories. However, unstructured content (typically text) plays the most crucial role, due to the amount of knowledge it can deliver, yet few efforts have been carried out to extract structured data out of it. Marco and Emilio built a fact extractor, which understands the semantics of a sentence thanks to Natural Language Processing (NLP) techniques. If you feel playful, you can download the produced datasets. For more details, check out this blog post. P.S.: the project has been cited by Python Weekly and Python Trending! Mentor: Marco Fossati (SpazioDati)
Better context vectors for disambiguation by Philipp Dowling
Better Context Vectors aimed to improve the representation of context used by DBpedia Spotlight by incorporating novel methods from distributional semantics. We investigated the benefits of replacing a word-count-based method with one that uses a model based on word2vec. Our student, Philipp Dowling, implemented the model reader based on a preprocessed version of Wikipedia (leading to a few commits to the awesome library gensim) and the integration with the main DBpedia Spotlight pipeline. Additionally, we integrated a method for estimating weights for the different model components that contribute to disambiguating entities. Mentors: Thiago Galery (Analytyca), Joachim Daiber (Amsterdam Univ.), David Przybilla (Idio)
Wikipedia Stats Extractor by Naveen Madhire
Wikipedia Stats Extractor aimed to create a reusable tool to extract raw statistics for Named Entity Linking out of a Wikipedia dump. Naveen built the project on top of Apache Spark and json-wikipedia, which makes the code more maintainable and faster than its previous alternative (pignlproc). Furthermore, Wikipedia Stats Extractor provides an interface which makes the task of processing Wikipedia dumps for purposes other than entity linking easier. Extra changes were made in the way surface-form stats are extracted, and lots of noise was removed, both of which should in principle help entity linking.
Special regards to Diego Ceccarelli who gave us great insight on how Json-wikipedia worked. Mentors: Thiago Galery (Analytyca), Joachim Daiber (Amsterdam Univ.), David Przybilla (Idio)
DBpedia Live extensions by Andre Pereira
DBpedia Live provides near real-time knowledge extraction from Wikipedia. As Wikipedia scales, we needed to move our caching infrastructure from MySQL to MongoDB. This was the first task of Andre’s project. The second task was the implementation of a UI displaying the current status of DBpedia Live along with some admin utilities. Mentors: Dimitris Kontokostas (AKSW/KILT), Magnus Knuth (HPI)
Adding live-ness to the Triple Pattern Fragments server by Pablo Estrada
DBpedia currently has a highly available Triple Pattern Fragments interface that offloads part of the query processing from the server into the clients. For this GSoC, Pablo developed a new feature for this server so it automatically keeps itself up to date with new data coming from DBpedia Live. We do this by periodically checking for updates, and adding them to an auxiliary database. Pablo developed smart update, and smart querying algorithms to manage and serve the live data efficiently. We are excited to let the project out in the wild, and see how it performs in real-life use cases. Mentors: Ruben Verborgh (Ghent Univ. – iMinds) and Dimitris Kontokostas (AKSW/KILT)
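The update mechanism described above (a read-only base dataset plus an auxiliary database of live changes, consulted together at query time) can be sketched in a few lines. This is a toy model of the idea only; class and predicate names are illustrative, not Pablo's actual implementation.

```python
# Toy sketch of the auxiliary-database idea: the base dataset stays
# read-only, live inserts/deletes accumulate in small side stores, and
# every lookup consults the combined view. Names are illustrative.

class LiveTripleStore:
    def __init__(self, base_triples):
        self.base = set(base_triples)   # loaded once from the release dump
        self.added = set()              # live inserts from DBpedia Live
        self.removed = set()            # live deletes

    def apply_update(self, inserts, deletes):
        """Fold one DBpedia Live changeset into the auxiliary stores."""
        for t in inserts:
            self.removed.discard(t)     # a re-insert cancels a prior delete
            self.added.add(t)
        for t in deletes:
            self.added.discard(t)       # a delete cancels a prior insert
            self.removed.add(t)

    def triples(self):
        """Current view: base plus additions, minus deletions."""
        return (self.base | self.added) - self.removed

store = LiveTripleStore({("dbr:Leipzig", "dbo:country", "dbr:Germany")})
store.apply_update(
    inserts={("dbr:Leipzig", "dbo:mayor", "dbr:Burkhard_Jung")},
    deletes=set(),
)
```

Keeping the deltas separate is what makes the periodic update cheap: the large base index is never rewritten, and the auxiliary stores can be compacted into it offline.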
Registration for mentors @ GSoC 2016 starts next month and DBpedia will of course try to participate again. If you want to become a mentor or just have a cool idea that seems suitable, don’t hesitate to ping us via the DBpedia discussion or developer mailing lists.
Your DBpedia Association
Friday, January 15, 2016 - 4:03pm
A belated Happy New Year to all DBpedia enthusiasts !!!
Two weeks of 2016 have already passed and it is about time to reflect on the past three months, which revolved around the 5th DBpedia meeting in the USA.
After 4 successful meetings in Poznan, Dublin, Leipzig and Amsterdam, we thought it was about time to cross the Atlantic and meet the US part of the DBpedia community. On November 5th, 2015, our 5th DBpedia Community meeting was held at the world-famous Stanford University in Palo Alto, California.
First and foremost, we would like to thank Michel Dumontier, Associate Professor of Medicine at Stanford University, and his Laboratory for Biomedical Knowledge Discovery for hosting this great event and giving so many US-based DBpedia enthusiasts a platform to exchange ideas and meet in person. The event was constantly commented on and discussed, not just on University premises but also online via Twitter under #DBpediaCA. We would also like to thank the rest of the organizers: Pablo Mendes, Marco Fossati, Dimitris Kontokostas and Sebastian Hellmann for devoting a lot of time to planning the meeting and coordinating with the presenters.
We set out to the US with two main goals. Firstly, we wanted DBpedia and Knowledge Graph professionals and enthusiasts to network and discuss ideas about how to improve DBpedia. Secondly, the event also aimed at finding new partners, developers and supporters to help DBpedia grow professionally, in terms of competencies and data, as well as to enlarge the DBpedia community itself to spread the word and to raise awareness of the DBpedia brand.
Therefore, we invited representatives of some of the best-known actors in the data community, such as:
- Michel Dumontier, Stanford
- Anshu Jain, IBM Watson
- Nicolas Torzec, Yahoo!
- Yves Raimond, Netflix
- Karthik Gomadam, Independent
- Joakim Soderberg, Blippar
- Alkis Simitsis, HP Labs
- Yashar Mehdad, Yahoo! Labs
…who addressed interesting topics and, together with all the DBpedia enthusiasts, engaged in productive discussions and raised controversial questions.
The meeting itself was co-located with a pre-event designed as a workshop, giving the attending companies plenty of room and time to raise questions and discuss “hot topics”. Classification schemas and multilingualism were at the top of the list of topics that interested the invited companies most. In this interactive setting, our guests from Evernote, BlippAR, World University and Wikimedia answered questions about the DBpedia ontology and mappings, Wikipedia categories, as well as similarities and differences with Wikidata.
Following the pre-event, the main event attracted attendees with lightning talks from major companies of interest to the DBpedia community.
The host of the DBpedia Meeting, Michel Dumontier from Stanford, opened the main event with a short introduction to his group’s focus on biomedical data. He and his group currently concentrate on integrating datasets to extract maximal value from data. Right at the beginning of the meeting, Dumontier highlighted the value of the already existing yet unexploited data out there.
The meeting had two main thematic foci. The first concerned topics that companies were interested in and raised during the session: experts from Yahoo, Netflix, Diffbot, IBM Watson and Unicode addressed issues such as fact extraction from text via NLP, knowledge base construction techniques, recommender systems leveraging data from a knowledge base, and multilingual abbreviation datasets.
The second focus of the event revolved around DBpedia and encyclopedic knowledge graphs, including augmented reality, addressed by BlippAR and Nuance. We have summed up some of the talks for you here. Also check out the slides provided alongside the summaries to get a deeper insight into the event.
Nicolas Torzec, Yahoo! – Wikipedia, DBpedia and the Yahoo! Knowledge Graph
Nicolas Torzec described how DBpedia played a key role at the beginning of the Knowledge Graph effort at Yahoo!. They decided to use the Extraction Framework directly rather than the provided data dumps, which allowed them to update continuously as Wikipedia changed. Yashar Mehdad, also from Yahoo!, focused on multilingual named-entity detection and linking. He described how users make financial choices based on the availability of products in their local language, which highlights the importance of multilinguality (also a core objective of the DBpedia effort).
Anshu Jain, IBM Watson – Watson Knowledge Graph – DBpedia Meetup
Anshu Jain presented the IBM Watson team’s effort not as building a knowledge graph, but as building a platform for working with knowledge graphs. For them, a graph is just an abstraction, not a data structure. He also highlighted that context is very important.
Yves Raimond, Netflix – Knowledge Graphs @ Netflix
Yves Raimond from Netflix observed that in their platform, every impression is a recommendation. They rely on lots of machine learning algorithms, and pondered the role of knowledge graphs in that setting. Will everything (user + metadata) end up in a graph so that algorithms can learn from it? Click here for the complete presentation.
Joakim Soderberg, BlippAR
Joakim Soderberg mentioned that at Blippar it’s all about the experience. They focus on augmented reality, which can benefit from information drawn from many sources, including DBpedia.
David Martin, Nuance – Using DBpedia with Nuance
David Martin from Nuance talked about how DBpedia is used as a source of named entities. He observed that multi-role ranking is an important issue; consider, for instance, the difference between Arnold Schwarzenegger’s roles as politician and actor. Click here for the complete presentation.
Karthik Gomadam, Accenture Technology Labs – Rethinking the Enterprise Data Stack
Karthik Gomadam discussed data harmonization in the context of linked enterprise data.
Alkis Simitsis, Hewlett Packard – Complex Graph Computations over Enterprise Data
Alkis Simitsis talked about complex graph computations over enterprise data, while Georgia Koutrika from HP Labs presented their solution for fusing knowledge into recommendations.
Other topics discussed were:
- Steven Loomis, IBM – Automatically extracted abbreviation data with DBpedia
- Scott McLeod, World University and School – MIT Open Courseware with Wikipedia. Classes in virtual worlds.
- Diffbot’s developers talked about structuring the Web with their product with the help of DBpedia and DBpedia Spotlight.
You can find some more presentations here:
Feedback from attendees and via our Twitter stream #DBpediaCA was generally very positive and insightful. The choice of invited talks was appreciated unanimously, and so was the idea of having lightning talks. In the spirit of previous DBpedia Meetings, we allocated time for all attendees who were interested in speaking. Some commented that they would have liked more time to ask questions and discuss, while others thought the meeting ran too late. We will consider the trade-offs and try to improve in the next iteration. There was strong support from attendees for meeting again as soon as possible!
We are now looking forward to the next DBpedia Community meeting, which will be held on February 12, 2016 in The Hague, Netherlands. So, save the date and visit the event page. We will keep you informed via the DBpedia website and blog.
Finally, we would like to thank Yahoo! for sponsoring the catering during the DBpedia community meeting. We would also like to acknowledge Google Summer of Code as the reason Marco and Dimitris were in California and for covering part of their travel expenses.
The event was initiated by the DBpedia Association. The following people received travel grants from the DBpedia Association: Marco Fossati, Dimitris Kontokostas and Joachim Daiber.
Friday, September 4, 2015 - 3:30pm
We are happy to announce the release of DBpedia 2015-04 (also known as 2015 A). The new release is based on updated Wikipedia dumps from February/March 2015 and features an enlarged DBpedia ontology with more infobox-to-ontology mappings, leading to richer and cleaner data.
The English version of the DBpedia knowledge base currently describes 5.9M things, of which 4.3M resources have abstracts, 452K have geo coordinates and 1.45M have depictions. In total, 4 million resources are classified in a consistent ontology, including 2.06M persons, 682K places (including 455K populated places), 376K creative works (including 92K music albums, 90K films and 17K video games), 188K organizations (including 51K companies and 33K educational institutions), 278K species and 5K diseases. The total number of resources in English DBpedia is 15.3M; besides the 5.9M resources, this includes 1.2M SKOS concepts (categories), 6.83M redirect pages, 256K disambiguation pages and 1.13M intermediate nodes.
We provide localized versions of DBpedia in 128 languages. All these versions together describe 38.3 million things, out of which 23.8 million are localized descriptions of things that also exist in the English version of DBpedia. The full DBpedia data set features 38 million labels and abstracts in 128 different languages, 25.2 million links to images and 29.8 million links to external web pages; 80.9 million links to Wikipedia categories, and 41.2 million links to YAGO categories. DBpedia is connected with other Linked Datasets by around 50 million RDF links.
In addition we provide DBpedia datasets for Wikimedia Commons and Wikidata.
Altogether the DBpedia 2015-04 release consists of 6.9 billion pieces of information (RDF triples) out of which 737 million were extracted from the English edition of Wikipedia, 3.76 billion were extracted from other language editions and 2.4 billion from DBpedia Commons and Wikidata.
From this release on, we will try to provide two releases per year, one in April and the next in October. The 2015-04 release was delayed by three months, but we will try to keep to the schedule and release 2015-10 at the end of October or in early November.
One of our plans for the next release is to remove the URI encoding of English DBpedia (dbpedia.org) and switch to IRIs only. This will simplify the release process and align English DBpedia with all other DBpedia language datasets. We know that this will probably break some links to DBpedia, but we feel it is the only way to move forward. If you have any reasons against this change, please let us know now.
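To illustrate what the switch from URI encoding to IRIs means in practice (the resource name below is just an example, not taken from the release): with URI encoding, non-ASCII characters in an identifier are percent-encoded, while an IRI keeps them as-is, so the two forms identify the same thing but are not string-equal.

```python
from urllib.parse import quote, unquote

# Illustrative example resource containing a non-ASCII character.
iri = "http://dbpedia.org/resource/Café"            # IRI form
uri = "http://dbpedia.org/resource/" + quote("Café")  # percent-encoded URI form

# Decoding the URI form recovers the IRI form.
same_after_decoding = unquote(uri) == iri
```

This string inequality is exactly why existing links that use the percent-encoded form may break after the switch.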
A complete list of changes in this release can be found on GitHub.
From this release on, we have adjusted the download page folder structure, giving us more flexibility to offer more datasets in the near future.
The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2015 ontology encompasses
- 735 classes (DBpedia 2014: 685)
- 1,098 object properties (DBpedia 2014: 1079)
- 1,583 datatype properties (DBpedia 2014: 1,600)
- 132 specialized datatype properties (DBpedia 2014: 116)
- 408 owl:equivalentClass and 200 owl:equivalentProperty mappings to external vocabularies
Additional Infobox to Ontology Mappings
The editors community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. There are six new languages with mappings: Arabic, Bulgarian, Armenian, Romanian, Swedish and Ukrainian.
For the DBpedia 2015-04 extraction, we used a total of 4,317 template mappings (DBpedia 2014: 3,814 mappings).
Extended Type System to cover Articles without Infobox
Until the DBpedia 3.8 release, a concept was only assigned a type (like person or place) if the corresponding Wikipedia article contained an infobox indicating that type. Starting with the 3.9 release, we provide type statements for articles without an infobox, inferred from the link structure within the DBpedia knowledge base using the algorithm described in Paulheim/Bizer 2014. For the new release, an improved version of the algorithm was run to produce type information for 400,000 things that were formerly untyped. A similar algorithm (presented in the same paper) was used to identify and remove potentially wrong statements from the knowledge base.
Both of these datasets use a typing system that goes beyond the DBpedia ontology; we provide a subset mapped to the DBpedia ontology (dbo) and a full version with all types (ext).
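One way to compare the dbo-mapped and extended types for a resource is simply to ask for all its rdf:type values. As a hedged sketch (the resource IRI and function name are illustrative, not from the release notes), building such a SPARQL query could look like this:

```python
# Illustrative sketch: construct a SPARQL query that lists every
# rdf:type of one resource, so dbo and ext types can be inspected
# side by side after running it against an endpoint.
def types_query(resource_iri):
    """Return a SPARQL SELECT query for all rdf:type values of a resource."""
    return (
        "SELECT DISTINCT ?type WHERE { "
        f"<{resource_iri}> a ?type . "
        "}"
    )

# Example resource; any DBpedia IRI works here.
query = types_query("http://dbpedia.org/resource/Leipzig")
```

Types in the `http://dbpedia.org/ontology/` namespace would come from the dbo subset, while additional vocabularies in the result would stem from the extended (ext) typing system.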
New and updated RDF Links into External Data Sources
We updated the following RDF link sets pointing at other Linked Data sources: Freebase, Wikidata, Geonames and GADM.
Accessing the DBpedia 2015-04 Release
You can download the new DBpedia datasets in RDF format from http://wiki.dbpedia.org/Downloads or
Additional external dataset contributions
From this release onwards, we will provide additional datasets related to DBpedia. For 2015-04, we provide a PageRank dataset for English and German, provided by HPI.
As usual, the new dataset is also published as 5-Star Linked Open Data and is accessible via the SPARQL query service endpoint at http://dbpedia.org/sparql and the Triple Pattern Fragments service at http://fragments.dbpedia.org/.
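Querying the public SPARQL endpoint is a plain HTTP request. As a hedged sketch (the example query is illustrative; actually sending the request requires network access, so this only prepares the GET URL following the SPARQL Protocol convention):

```python
from urllib.parse import urlencode

# Public endpoint named above.
ENDPOINT = "http://dbpedia.org/sparql"

def sparql_get_url(query):
    """Encode a SPARQL query as a GET request URL asking for JSON results."""
    params = {"query": query, "format": "application/sparql-results+json"}
    return ENDPOINT + "?" + urlencode(params)

# Example: list five resources typed as dbo:Place.
url = sparql_get_url(
    "SELECT ?place WHERE { ?place a <http://dbpedia.org/ontology/Place> } LIMIT 5"
)
```

Fetching that URL (e.g. with `urllib.request.urlopen`) would return the result bindings as JSON; the Triple Pattern Fragments service offers a lighter-weight alternative for clients that prefer to do the query processing themselves.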
Lots of thanks to
- Markus Freudenberg (University of Leipzig) for taking over the whole release process
- Dimitris Kontokostas for conveying his considerable knowledge of the extraction and release process.
- Volha Bryl and Daniel Fleischhacker (University of Mannheim) for their work on the previous release and their continuous support in this release.
- Alexandru Todor (University of Berlin) for contributing time and computing resources for the abstract extraction.
- All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
- The whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.
- Heiko Paulheim (University of Mannheim) for re-running his algorithm to generate additional type statements for formerly untyped resources and to identify and remove wrong statements.
- Václav Zeman and the whole LHD team (University of Prague) for their contribution of additional DBpedia types
- Marco Fossati (FBK) for contributing the DBTax types
- Petar Ristoski (University of Mannheim) for generating the updated links pointing at the GADM database of Global Administrative Areas. Petar will also generate an updated release of DBpedia as Tables soon.
- Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy) for providing the links from DOLCE to DBpedia ontology.
- Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software) for loading the new data set into the Virtuoso instance that provides 5-Star Linked Open Data publication and SPARQL Query Services.
- OpenLink Software (http://www.openlinksw.com/) altogether for providing the SPARQL Query Services and Linked Open Data publishing infrastructure for DBpedia in addition to their continuous infrastructure support.
- Ruben Verborgh from Ghent University – iMinds for publishing the dataset as Triple Pattern Fragments, and iMinds for sponsoring DBpedia’s Triple Pattern Fragments server.
- Magnus Knuth (HPI) for providing a pagerank dataset for English and German
- Ali Ismayilov (University of Bonn) for implementing the DBpedia Wikidata dataset.
- Vladimir Alexiev (Ontotext) for leading a successful mapping and ontology clean up effort.
- Nono314 and other community members for contributing a lot of improvements and bug fixes to the extraction framework.
- All the GSoC students and mentors working directly or indirectly on the DBpedia release
The work on the DBpedia 2015-04 release was financially supported by the European Commission through the project ALIGNED – quality-centric, software and data engineering (http://aligned-project.eu/).
Have fun with the new DBpedia 2015-04 release!
Markus Freudenberg, Dimitris Kontokostas, Sebastian Hellmann