We are pleased to announce our keynote speaker for the workshop:
From Redlining to Robots: How newsrooms apply technology to the craft of journalism
Aron Pilhofer, James B. Steele Chair in Journalism Innovation at Temple University
Aron Pilhofer is the James B. Steele Chair in Journalism Innovation at Temple University. In addition to teaching, his work is focused on new business models, digital transformation and innovation in news. Before joining Temple, Pilhofer was executive editor, digital, and interim chief digital officer at the Guardian in London. There, he led the Guardian's product and technology teams and headed visual journalism, including pictures, graphics, interactive and data journalism. Before coming to the Guardian, Aron was associate managing editor for digital strategy and editor of interactive news at The New York Times. He was also a reporter at Gannett newspapers in New Jersey and Delaware, headed data journalism at the Center for Public Integrity in Washington, D.C. and served on the training staff of Investigative Reporters and Editors. Outside the newsroom, Aron co-founded two news-related startups: DocumentCloud.org, now housed at Temple University's Klein College of Media and Communication, and Hacks & Hackers.
The use of technology in newsrooms is nothing new: journalists have been applying sophisticated data analysis techniques to find and tell stories for at least half a century. In 1972, journalists at the Philadelphia Inquirer borrowed time on a mainframe to land a story about unequal sentencing. In 1989, the Atlanta Journal-Constitution's series about redlining won a Pulitzer Prize, the first for a piece of data journalism. And in the aftermath of Hurricane Andrew in 1992, the Miami Herald used GIS to show that shoddy workmanship, not wind, was likely to blame for much of the damage. Although journalism isn't thought of as a high-tech profession, journalists have been among the earliest adopters of new techniques and technologies to find and report stories. This talk will cover how journalists have embraced technology in the past, and how they might in the future.
This keynote is sponsored by Bloomberg.
Mining Leaks and Open Data to Follow the Money.
Friedrich Lindenberg, Team Lead at OCCRP
Friedrich Lindenberg leads the data team at OCCRP. He is responsible for the development of OCCRP Data and supports ongoing investigations where data analysis is needed. In 2014/2015, Friedrich was a fellow with the International Center for Journalists, working with the African Network of Centers for Investigative Reporting (ANCIR), and in 2013 he was a Knight-Mozilla OpenNews fellow at Spiegel Online in Hamburg. Prior to that, Friedrich was an open data activist who worked to promote the release of government information about public finance, lobbying, procurement and lawmaking across the world.
How can data-driven approaches help to uncover large-scale corruption in government and business?
The Organized Crime and Corruption Reporting Project (OCCRP) is a network of
investigative reporters across 45 countries that uncovers cases of bribery,
theft and money laundering around the world.
To support this, OCCRP has built a unique data resource covering more than a billion entities from over 400 data sources, along with Aleph, a suite of open-source data integration and search tools. This allows us to give investigative reporters visibility into large amounts of evidence, and to cross-reference databases in ways that uncover evidence of wrongdoing.
We'll present the why, what and how of this project, and we hope for feedback from the IR community on what our next steps could be to increase search quality and provide better recommendations to our investigators.
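As a toy illustration of the cross-referencing idea described above (this is an assumption-laden sketch, not OCCRP's actual Aleph pipeline), one basic step is normalizing entity names and intersecting two datasets:

```python
import re

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and sort tokens so that
    'Doe, John' and 'john DOE' map to the same key."""
    tokens = re.sub(r"[^\w\s]", " ", name.lower()).split()
    return " ".join(sorted(tokens))

def cross_reference(leak_names, registry_names):
    """Return (leak name, registry name) pairs that match after
    normalization -- candidate leads for an investigator to review."""
    index = {}
    for n in registry_names:
        index.setdefault(normalize(n), []).append(n)
    return [(n, hit) for n in leak_names
            for hit in index.get(normalize(n), [])]

# Hypothetical example data, purely for illustration:
leaked = ["Doe, John", "Jane Roe"]
registry = ["john DOE", "J. Smith"]
print(cross_reference(leaked, registry))  # [('Doe, John', 'john DOE')]
```

Real systems go far beyond exact normalized matches (fuzzy matching, transliteration, entity resolution across languages), which is exactly where the IR community's feedback is sought.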
The workshop will also feature a discussion panel, combining insights from journalism and information retrieval. The panel will feature (in alphabetical order):
Julio Gonzalo, UNED
Julio Gonzalo is a full professor at UNED (Universidad Nacional de Educación a Distancia), where he leads the Information Retrieval and Natural Language Processing research group (http://nlp.uned.es). He has co-organized several comparative evaluation campaigns, such as RepLab (for Online Reputation Management systems), WePS (Web People Search systems) and iCLEF (for interactive Cross-Language Retrieval tasks). He was co-recipient of a Google Faculty Research Award for his work on evaluation metrics (together with Stefano Mizzaro and Enrique Amigó), and he recently gave a keynote at CLEF 2018. His research interests include entity-oriented summarization and semantic search, evaluation methodologies and metrics in information access, semantic textual similarity, online reputation monitoring, and information access technologies for social media. His publications have received over 4,600 citations, for an h-index of 33, according to Google Scholar (https://scholar.google.es/citations?user=opFCmpYAAAAJ).
Jochen Leidner, Refinitiv and University of Sheffield
Dr. Jochen L. Leidner has been Director of Research, R&D at Thomson Reuters (and,
since 10/2018, at Refinitiv Ltd., formerly the Financial & Risk division of
Thomson Reuters), the parent company of the Reuters News agency and other
professional brands such as the Eikon financial information terminal. He is also a
Royal Academy of Engineering Visiting Professor of Data Analytics at the
University of Sheffield, a Guest Lecturer at the University of Zurich, and a
Scientific Expert for the European Commission.
He is a computer scientist and computational linguist by training. In the past, he held a Royal Society of Edinburgh Fellowship at the University of Edinburgh, where he was Principal Investigator on a mobile question answering project; he has also held founding roles in startups and software development roles at SAP. He holds a Master's degree in computational linguistics, English language and literature, and computer science from Friedrich-Alexander-University Erlangen-Nuremberg. In 2002, he obtained a Master of Philosophy in Engineering (MPhil) in Computer Speech, Text and Internet Technologies at the University of Cambridge. In 2007, he obtained a PhD in Informatics with his thesis 'Toponym Resolution in Text', which also won the First ACM SIGIR Doctoral Consortium Award. In 2015 and again in 2016, he won the Thomson Reuters Inventor of the Year award for the best patent application across the company globally.
He has edited, authored or co-authored over 60 publications and over a dozen patent applications and patents, and he has won multiple awards, including best paper awards, Inventor of the Year, and several merit-based scholarships. His scientific research focuses on improving information access, especially by applying both machine learning and rule-based methods to information extraction, question answering and news analytics.