Poster Reception
OAI-PMH harvesting metadata and virtual datastream
The DigiBESS repository exposes an OAI-PMH interface for external metadata harvesting. Metadata can be disseminated in two formats: OAI_DC and PICO. OAI_DC records are extracted from the object's DC datastream; PICO records are generated on the fly by an object service. This poster analyses the process of deploying virtual datastreams to supply different metadata formats.
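The harvesting flow can be sketched as follows: a harvester requests the same record twice, switching only the `metadataPrefix`. The endpoint URL and identifier below are hypothetical placeholders, not the actual DigiBESS values.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and identifier -- the real DigiBESS values may differ.
BASE = "http://www.digibess.it/oai"

def oai_request(verb, **params):
    """Build an OAI-PMH request URL for the given verb and arguments."""
    return BASE + "?" + urlencode({"verb": verb, **params})

# The same object is disseminated in either format by switching metadataPrefix:
# oai_dc comes from the stored DC datastream, pico from the on-the-fly service.
dc_url = oai_request("GetRecord", identifier="oai:digibess:1", metadataPrefix="oai_dc")
pico_url = oai_request("GetRecord", identifier="oai:digibess:1", metadataPrefix="pico")
```

From the harvester's point of view the two datastreams are indistinguishable; whether the response is read from storage or generated by a service is an internal matter of the repository.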
Attachment | Size |
---|---|
POSTER_OAI-PMH harvesting metadata and virtual datastream_OR2013.pdf | 947.98 KB |
ABSTRACT_OAI-PMH harvesting metadata and virtual datastream_OR2013.docx - updated abstract | 14.76 KB |
Digital Repository Infrastructure: Should you rent or buy?
In 2008, the Alliance Digital Repository launched a consortial, open-source digital repository on locally purchased and managed hardware. After five years and a major software platform migration, our purchased hardware was at the limits of its capacity and much of it was going out of maintenance.
In the first half of 2013, we analyzed whether we should continue to buy hardware and manage it ourselves or switch to an "infrastructure-as-a-service" model. We will explain our decision-making process and provide the rubric we used to evaluate the sustainability of each option, including criteria such as cost, reliability, and scalability. We will also share the solution we ended up selecting (selection deadline is June 2013).
Attachment | Size |
---|---|
OR2013_rent_or_buy.pdf | 97.2 KB |
Bending the rules without breaking the repo: Using free RDF description in Fedora Commons repositories
This poster will present the design and use of an experimental extension module for Fedora Commons that ameliorates some of the limitations in Fedora's abilities to index, store, and manage RDF assertions that compose rich descriptive metadata. It permits the management of RDF assertions drawn from arbitrary XML metadata streams in an object, while keeping the XML metadata as the authoritative and only stored description. It allows for the creation and management of RDF triples the subjects of which are not Fedora resources. And it does these things with nothing more than simple XML configuration and XSLT stylesheets. The software is now publicly available and the poster will be accompanied by a live demonstration.
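The core idea can be illustrated with a toy transformation: triples are derived from the XML datastream on demand, so the XML remains the only stored description. The real module drives this with configurable XSLT stylesheets; the element-to-predicate mapping below is invented for illustration and is not the module's API.

```python
import xml.etree.ElementTree as ET

# Invented mapping for illustration; the actual module configures this via XSLT.
PREDICATES = {
    "title": "http://purl.org/dc/terms/title",
    "creator": "http://purl.org/dc/terms/creator",
}

def triples_from_xml(subject_uri, xml_text):
    """Derive RDF triples from an XML metadata stream without storing them:
    the XML stays the single authoritative description of the object."""
    root = ET.fromstring(xml_text)
    for el in root.iter():
        predicate = PREDICATES.get(el.tag)
        if predicate and el.text and el.text.strip():
            yield (subject_uri, predicate, el.text.strip())
```

Because the triples are generated rather than stored, the subject URI need not be a Fedora resource at all, which is what permits assertions about arbitrary subjects.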
Attachment | Size |
---|---|
poster.rtf | 5.03 KB |
A mash-up of a Japanese Open Repository and a Researcher CV Platform
The National Institute of Informatics (NII) has led a Japanese institutional repository project since 2014, developing homegrown repository software named WEKO as a module for the content management system NetCommons (NC). Recently, NII and the Japan Science and Technology Agency launched a researcher collaboration platform named Read and Researchmap (R&R), based on the NC framework. One of the major functions of R&R is to serve as an electronic CV system for researchers. In addition to the automatic archiving of articles, a variety of content such as presentation files and lecture notes is being deposited in R&R. To enhance the system's metadata handling, code reuse, and interoperability, we decided to implement WEKO as a back-end for the R&R content management system. The system works as a comprehensive self-archiving repository in Japan.
Attachment | Size |
---|---|
PosterManuscriptRRWEKO.pdf | 160.82 KB |
Crowdsourcing HCI for the institutional repository
It is said that one must not judge a book by its cover, but does that extend to researchers and cover sheets? Cover sheets excite enough discussion for technical and policy reasons – impact on metadata, necessity of use, branding, impact on publishers and so forth – that questions of their usability and efficacy are relegated to the bottom of the pile. In this era of cutting costs and trimming budgets, who has the money to spend on detailed investigation of anything that does not immediately impact the core functions of institutional repositories: encouraging deposits, repository upkeep and so forth? In this paper we demonstrate the use of a crowdsourcing platform to run an extensive between-subjects HCI experiment designed to explore the impact of cover sheets on common user tasks, such as identifying document elements like publication date and place of publication, and to evaluate user perceptions of document layout.
Attachment | Size |
---|---|
articleoncrowdsourcing.pdf | 77.64 KB |
All About DSpaceDirect
On February 22, 2013, the United States White House issued a directive supporting public access to publicly funded research. Specifically, the directive states that each agency falling under its provisions must "ensure that the public can read, download, and analyze in digital form final peer reviewed manuscripts or final published documents" that are the direct outputs of funded research. To assist organizations in meeting this mandate, and to continue to meet the needs of the academic community, the DuraSpace organization will be launching DSpaceDirect – a quick and cost-effective hosted service that allows users to store, organize, and manage DSpace repository content in the cloud. The "All About DSpaceDirect" poster will demonstrate how a hosted DSpace service can be used to preserve and provide access to academic faculty and student papers, projects, and research, making content easily searchable by end users and easily managed by content curators.
Attachment | Size |
---|---|
All About DSpaceDirect.pdf | 62.5 KB |
Automatic metadata generation from HTTP server logs
Many academic organizations have established institutional repositories (IRs) to gather digital publications. Most IR traffic comes from search engines such as Google, Bing, and Yahoo; the search performance of the repository itself, however, is poor. One reason is that the major search engines crawl the PDF files in IRs and index the keywords found in each PDF, whereas most repository systems do not handle the full text of PDF resources. Enriching the metadata of IR resources is necessary to improve search performance, but there is a limit to how much metadata can be registered manually.

When a user searching for bibliographies or references clicks a resource in a repository, the query used in the search is, by the user's own criteria, related to that resource. In other words, the user labels the resource with the search query. Both the accessed resource and the query are stored in the HTTP server's access log.

We therefore propose that new metadata can be collected from the access logs of an HTTP server. After the log entries coming from search engines are picked out, the search query and the accessed resource ID are extracted from each entry. If the query is not included in the metadata of that resource, the query is a candidate for new metadata. To verify whether search queries are suitable as new resource metadata, we calculated the concordance rates between the keywords extracted from the HTTP log and the metadata of the accessed resources.

The results showed that 60.1% of search keywords did not correspond to existing metadata. Detailed analysis also showed that most of the unmatched keywords (83.7%) are words included in the full text of the resource, while the rest (16.8%) are otherwise related to the resource. Search performance can therefore be improved by appending the keywords used as search queries as new metadata. New keywords extracted from the HTTP server's access log will be submitted to the repository server using the SWORD protocol.
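The extraction step can be sketched in a few lines. The log layout and query parameter below are assumptions (Apache combined log format and the `q` parameter used by Google and Bing); the authors' actual implementation may differ.

```python
import re
from urllib.parse import urlparse, parse_qs

# Matches the request and referer fields of an Apache combined-format log line.
LOG_RE = re.compile(r'"GET (?P<path>\S+) HTTP/[\d.]+" \d+ \d+ "(?P<referer>[^"]*)"')

def query_terms(log_line):
    """Extract search-engine query terms from the referer of an access-log line."""
    m = LOG_RE.search(log_line)
    if not m:
        return set()
    qs = parse_qs(urlparse(m.group("referer")).query)
    terms = qs.get("q", [])  # assumption: the engine passes the query as "q"
    return {t.lower() for t in " ".join(terms).split()}

def candidate_keywords(log_line, metadata_terms):
    """Terms the searcher used that are absent from the record's metadata --
    these are the candidates for new metadata described above."""
    return query_terms(log_line) - {t.lower() for t in metadata_terms}
```

Each candidate keyword would then be checked against the full text and, if accepted, deposited via SWORD.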
Attachment | Size |
---|---|
proposal.pdf | 81.19 KB |
What is P-CUBE?
P-CUBE was developed from open-source software recognized worldwide. For its storage layer, P-CUBE uses DuraSpace's Fedora, which is sponsored by the US NSF and the Library of Congress (LC); its business logic builds on the service and management functions of DSpace. P-CUBE uses MySQL as its relational database.
Attachment | Size |
---|---|
What is P-CUBE.pdf | 372.33 KB |
Redirecting Web service for ORCID to scholarly systems via the Researcher Name Resolver
We built a researcher identifier management system called the Researcher Name Resolver (RNR) to assist with the name disambiguation of authors in digital libraries on the Web. Since ORCID opened an API for accessing its records last year, RNR has provided a web service that navigates users to external scholarly systems in Japan via ORCID.
Attachment | Size |
---|---|
or2013-kurakawa-submit.pdf | 193.44 KB |
Challenge to Data-intensive Science: cooperation of metadata database for upper atmospheric research and author ID
Science is changing because of the impact of information technology. Experimental, theoretical, and computational science are all being affected by the data deluge, and a fourth, "data-intensive" science paradigm is emerging.
To investigate the mechanism of long-term variations in the upper atmosphere, we need to create integrated links between a variety of ground-based observations made at various locations from the equator to the poles, because what we observe is the result of complicated processes. However, the Japanese observational databases (e.g., from a global network of radars, magnetometers, optical sensors, and helioscopes) have been maintained and made available to the community separately by each institution that conducted the observations. Researchers therefore found it difficult to locate the various kinds of observational data needed to clarify global-scale physical phenomena.

To solve this problem, we built a metadata database for the upper atmosphere using an extended DSpace^1. The extension handles the IUGONET common metadata format, which includes resource types for datasets and human resources^2, instead of Dublin Core. Researchers can thereby reach distributed observational raw data via the metadata [1].
From the viewpoint of data publication, members of the International Council for Science (ICSU) World Data System (WDS) are considering assigning Digital Object Identifiers (DOIs) to datasets through a registration agency such as DataCite, and ICSU CODATA's Data Science Journal is considering how to realize data citation. Meanwhile, Open Researcher & Contributor ID (ORCID) launched its registry service in October 2012. Under these circumstances, we planned to put both IDs into the metadata mentioned above to create a linkage between raw data and data contributors. As a first step, we put ORCID IDs into the metadata.
[1] Koyama et al., "Metadata Database for Upper Atmosphere by using DSpace", OR2012.
^1 http://search.iugonet.org/iugonet/
^2 http://www.iugonet.org/data/schema/iugonet.xsd
Attachment | Size |
---|---|
or2013.pdf | 21.08 KB |
Client based interface and proxy server for content re-use framework based on OAI-PMH
The author proposes and implements a proxy server for OAI-PMH that translates XML responses into JSON. In addition, a JavaScript library has been developed that renders the JSON-format OAI-PMH responses as HTML on the client side. With this framework, the records in digital repositories become flexibly reusable in any web-based system via GetRecord and ListRecords queries.
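The translation step can be sketched as follows. The namespaces are those of the OAI-PMH and Dublin Core specifications, but the shape of the JSON output is an assumption, not necessarily what the author's proxy emits.

```python
import json
import xml.etree.ElementTree as ET

# Standard OAI-PMH and Dublin Core element namespaces.
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def records_to_json(oai_xml):
    """Translate an OAI-PMH GetRecord/ListRecords XML response into JSON."""
    root = ET.fromstring(oai_xml)
    records = []
    for rec in root.iter(f"{OAI}record"):
        header = rec.find(f"{OAI}header")
        fields = {}
        # Collect every Dublin Core element in the record's metadata.
        for el in rec.iter():
            if el.tag.startswith(DC) and el.text:
                fields.setdefault(el.tag[len(DC):], []).append(el.text)
        records.append({
            "identifier": header.findtext(f"{OAI}identifier"),
            "metadata": fields,
        })
    return json.dumps({"records": records})
```

Serving this JSON (e.g. as JSONP or with CORS headers) is what lets a purely client-side JavaScript library consume repository records directly.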
Attachment | Size |
---|---|
or13-namiki.pdf | 99.11 KB |
The Czech Digital Library - Fedora Commons based solution for aggregation, reuse and dissemination of digital content
The goal of the project presented in the paper is to create the Czech Digital Library, which will aggregate digital libraries operated by libraries in the Czech Republic. The Fedora Commons repository serves as the cornerstone, and all outputs of further development are available under the GNU General Public License. The Czech Digital Library will serve both as a common interface for end users and as a primary data provider for international projects. Tools to support complex digitization processes – including processing, workflow monitoring, archiving and dissemination of digital materials owned by a variety of memory institutions, especially libraries – are being developed as part of the project. The solution builds on the current situation in the Czech environment, reusing outputs from previous digitization and development projects. Its core is the Kramerius system, open-source software based on the Fedora Commons repository and used for digital data dissemination in all kinds of libraries, academic ones included.
Attachment | Size |
---|---|
Lhotak_CDL_paper_proposal_OR2013.pdf | 207.99 KB |
Collaborative repository to support food and feed safety risk assessment in Europe
The European Food Safety Authority (EFSA) was established to assess risks associated with the food chain. EFSA’s risk assessment work contributes to improving food safety in Europe and to building public confidence in the way risk is assessed. Risk assessment is a specialised field of applied science that involves reviewing scientific data and studies in order to evaluate risks associated with certain hazards. In order to ensure the risk assessment process is robust and transparent EFSA is using systematic review principles for the identification, selection, appraisal and synthesis of scientific studies used as the basis for risk assessments.
Reports, working papers and risk assessments produced by government agencies, competent authorities and other public institutions in the member states are key pieces of evidence for European risk assessments. Frequently these documents are not published through traditional scientific publishing routes and may only be publically available for a brief period of time on organisation websites. These reports generally come from routine control, prevention and monitoring programmes, which can be used to define baselines and provide a deeper understanding of the variability of risk factors between the regions and countries of Europe.
EFSA is developing a repository of food and feed safety related documents. Each document will have metadata compliant with Dublin Core/Open Archives Initiative standards and a Digital Object Identifier assigned to allow long-term retrieval. The repository will feature social networking functions to facilitate sharing of information between risk assessors and scientists, promoting collaborative working. The reuse and recycling of scientific reports by the European risk assessment community should result in targeted and cost-effective measures to ensure the security of Europe's food chain.
Attachment | Size |
---|---|
AbstractOpenRepositories.doc | 26 KB |
Defiant Objects: managing non-standard deposits in institutional repositories
We present the findings of a project undertaken to provide researchers and repository managers with simple guidance for the deposit and description of non-standard repository objects, for example art works, exhibitions and composite works. The project stems from an identified inability of current repository schemas to support these non-standard objects and from a need to understand how some objects become 'defiant'.
Attachment | Size |
---|---|
Defiant_Objects-OR2013-poster_proposal.doc | 63 KB |
Giving them what they want: Using Data Curation Profiles to guide Datastar development
The Datastar project, which first received NSF funding in 2007, is one of Cornell University Library’s (CUL) efforts in the area of digital data discovery (Steinhart 2010; Dietrich 2010; Khan 2011). Originally envisioned as a “data staging repository,” where researchers could upload data, create minimal metadata, and share data with selected colleagues, Datastar has been reconceived as a data registry to support discovery of datasets. Datastar is now being developed in partnership with Washington University at St. Louis (WUSTL) as a data registry that can be used either as a standalone tool or in conjunction with VIVO, Cornell’s open source semantic web application for research and scholarship networks. These more recent efforts were funded by the U.S. Institute of Museum and Library Services.
With the new focus, we wanted to ensure development decisions were driven by real user needs. To that end, we used the Data Curation Toolkit (http://datacurationprofiles.org/) to conduct interviews and create a set of Data Curation Profiles (DCPs) with participants selected at CU and WUSTL (Witt and others 2009). Designed to elicit information needs associated with a data set or collection, the structured interviews provided a general framework for discussing data with researchers and for creating a profile covering the stages of the data lifecycle. Participants from a broad range of disciplines were invited to take part, and eight completed interviews.
After evaluation and prioritization of the findings from the interviews, a set of particularly relevant responses emerged and directly influenced Datastar development. In contrast, some findings from the DCPs were out of scope for the current iteration of Datastar, but they were still interesting because they helped to provide additional details about the ways researchers prefer to interact with their data. Although not all of the findings will result in functionality in current or later iterations of Datastar, they will almost certainly inform future development. Using the DCPs allowed us to focus Datastar development efforts on addressing real user needs, avoiding wasted effort. This presentation will discuss both the results of our interviews and the development efforts that were prioritized as a result of our findings.
Attachment | Size |
---|---|
DatastarPoster_OR2013_Final.docx | 18.68 KB |
An Investigation into Journal Research Data Policies: Lessons from the JoRD Project
The JoRD project was a feasibility study conducted by the Centre for Research Communications to assess the potential and scope of a centralised service that would collate and disseminate information about journal data policies. A survey of existing policies was carried out, stakeholders were consulted and a range of business models were suggested. The study found that the data sharing environment is in a confused state, and that guidance which would aid the use, reuse and reproduction of data would be of benefit to all stakeholders. The presentation will outline the current state of journal data sharing policies, views and practices of stakeholders and their requirements.
Attachment | Size |
---|---|
24 7 proposal for OR2013 (2).docx | 24.74 KB |
Expanding Canada’s Research Data Eco-system: Repositories as enablers in overcoming cultural hurdles
Canada has made some progress towards data management as a result of the Data Liberation Initiative. We have trained data librarians to mark up data in the Data Documentation Initiative (DDI) standard. This has led to the emergence of tools for access such as odesi and equinox. While this looks like a promising start, researcher-generated data under the auspices of granting councils are largely missing in action. Why?
This presentation will focus on the missing pieces in the research data management puzzle and the role of repositories in completing the picture.
Attachment | Size |
---|---|
Canada's Research Data Eco-System.pdf | 116.79 KB |
Customizing STEM Instruction with Educational Digital Libraries
The Curriculum Customization Service (CCS) is a web-based tool that allows middle and high school teachers to customize instruction in Science, Technology, Engineering, and Mathematics (STEM) by incorporating interactive digital teaching and learning resources drawn from educational digital libraries into their lesson plans and student learning activities. The CCS is currently being used in six school districts in Colorado, Nevada, and Utah, and it supports drawing resources from both open repositories and publisher materials. The CCS also allows teachers to share the materials that they create and the resources that they select with other teachers within their school district.
Attachment | Size |
---|---|
CCS OR2013 poster-Krafft.docx | 15.25 KB |
Link it or don't use it: transitioning metadata to linked data in Hydra
The University of Oregon Libraries and Oregon State University Libraries have been successfully collaborating on a digital asset management system for four years, OregonDigital.org. OregonDigital.org holds diverse collections of digital archival materials from art slides to faculty research to traditional digitized and born-digital archival collections. In addition, we host unique collections from local government, historical societies, and museums. In preparation for a migration to a new platform, based on Hydra, we are re-evaluating the metadata schemas used in these collections and transitioning to an open interoperable framework. A key element in the transition to Hydra is a major metadata transformation from locally customized Qualified Dublin Core, VRA Core 4.0, and MODS to Linked Data vocabularies with formal specifications using RDFS, SKOS, and OWL.
This poster presents our use of the property hierarchies included in RDF vocabularies to build interoperability into our Hydra metadata. Our implementation includes automated cross-schema indexing and intelligent display and navigation of properties unknown to the software. This large undertaking involves re-thinking metadata schemas created over fifteen years by multiple institutions, as well as incorporating new elements, such as those from the DataCite schema. By utilizing existing linked data vocabularies and creating linked properties for additional elements, we are attempting to build one schema that works across content types and collections, yet is flexible enough to continue to evolve with our needs.
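One way to read "automated cross-schema indexing" is that a value recorded under a specific property also rolls up into the index fields of every super-property in the hierarchy. The subproperty chain below is illustrative only, not OregonDigital's actual vocabulary mapping.

```python
# Illustrative subPropertyOf chain: each specific property points to the more
# general property it specializes (e.g. via rdfs:subPropertyOf assertions).
SUPER_PROPERTY = {
    "vra:agent": "dcterms:creator",
    "mods:namePersonal": "dcterms:creator",
    "dcterms:creator": "dc:creator",
}

def index_fields(prop):
    """All index fields a value should roll up into, following the
    subPropertyOf links to the most general property."""
    fields = [prop]
    while prop in SUPER_PROPERTY:
        prop = SUPER_PROPERTY[prop]
        fields.append(prop)
    return fields
```

A search against the general `dc:creator` field then matches records described with any of the more specific schemas, which is what makes one index serve several source vocabularies.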
Attachment | Size |
---|---|
or2013_poster_proposal_estlund_johnson.pdf | 49.1 KB |
The Repository as Data (Re) User: Hand Curating for Replication
This poster describes the tools and workflow used by the ISPS Data Archive to enhance the usability and usefulness of its research data. Specifically, it explains how replication of published results drives the curation work undertaken at the ISPS Data Archive and offers researchers a re-use test case.
Attachment | Size |
---|---|
OR2013_proposal_Peer.pdf | 179.07 KB |
More than seeing what sticks: Aligning repository assessment with institutional priorities
This poster communicates the development of an assessment program for a newly launched institutional repository. Coinciding with the Provost's development of his strategic initiatives, Digital Repository @ Iowa State University needed to identify metrics that were aligned with the institutional mission and demonstrated a positive impact on the visibility of the university's research and scholarship.
Attachment | Size |
---|---|
Inefuku-OR2013proposal.pdf | 80.48 KB |
Inefuku-OpenRepositories2013.pdf | 71.13 KB |
Phase One of the Comprehensive Extensible Data Documentation and Access Repository
The goal of this project is to improve the documentation of US federal statistical system data by making it more discoverable, accessible, and understandable for scientific research. The project has been named CED2AR (Comprehensive Extensible Data Documentation and Access Repository). The objective of CED2AR is to make standardized metadata from heterogeneous sources searchable through an online interface designed to be intuitive to researchers. Phase One, a subset of the overall CED2AR project, is to develop the search API and the web interface for user searches. This presentation of Phase One covers the following components: project background, deliverables, technical feasibility, system requirements, system and program design, user interface design, and user testing. Some of these components are illustrated in this proposal.
Attachment | Size |
---|---|
OR20130095 Paper Proposal.pdf | 154.36 KB |
An Overview of the U of A Health Research Data Repository (HRDR): Development, Current status, and Future Steps
The Health Research Data Repository (HRDR), located within the Faculty of Nursing, University of Alberta, Canada, entered its operational phase in January 2013. The HRDR employs secure remote access for its approved users and is a secure and confidential environment for supporting health related research projects and the management of their data/metadata. Additionally, the HRDR has a mandate to promote educational opportunities regarding research data management best practices. One of the initial projects underway within the HRDR is a collaboration with Metadata Technologies North America (MTNA) and Nooro Online Research to develop a data infrastructure platform supporting a Longitudinal Monitoring System (LMS) using data collected within the Translating Research in Elder Care (TREC) project (http://www.trecresearch.ca). Specifically, the LMS data infrastructure platform uses DDI based metadata to support the collection/ingestion, harmonization, and merging of TREC data, as well as the timely delivery of reports/outputs based on these data. Development of the HRDR, as well as a current overview of its status and projects, will be discussed. Specific focus will be placed on the development, current status, and forward work relating to the TREC Longitudinal Monitoring System project.
Attachment | Size |
---|---|
Open_Repositories_Conference_2013_abstract_proposal_jd.docx | 37.09 KB |
Holistically Preserving and Presenting Complex Research Data
Digital repositories are continually challenged by research projects and their associated complex resources. In the research data environment the role of the digital repository is multi-faceted: storage, preservation, security, and high availability of resources, together with access to complete research data projects, are expectations of researchers.

This paper will discuss how RUcore, the Rutgers University Community Repository, is addressing some of these challenges by holistically storing and managing individual resources that have complex file structures.
Attachment | Size |
---|---|
Holistically Managing _1__rev.pdf | 40.84 KB |
Addressing Impediments to Reuse – The Open Folklore Portal
Attachment | Size |
---|---|
OR13 - Addressing Impediments to Reuse – The Open Folklore Portal.pdf | 105.95 KB |
Embed Audio and Video from Kaltura Streaming Server into Lume
This poster presents the ongoing project of the Federal University of Rio Grande do Sul (UFRGS – Universidade Federal do Rio Grande do Sul) to provide audio and video streaming embedded in its digital repository, Lume, which uses DSpace. The Kaltura Community Edition was chosen because, as an open source project, it offers the advantages of a self-hosted video platform that can be customized to the university's specific needs, at no additional cost to the institution. Kaltura supports the upload of several video formats, as well as custom transcoding options, allowing the end user to play videos in optimized conditions across several formats and platforms. Currently the Lume–Kaltura integration is done manually, but we intend to automate the video uploads from Lume to Kaltura, with seamless availability in Lume of embedded links to videos stored on the Kaltura server.
Attachment | Size |
---|---|
Poster_OR2013_Kaltura_menor2.pdf | 776.66 KB |
OPENAIREPLUS: Supporting Repository Interoperability through Guidelines
Supporting the open access policy of the European Commission, OpenAIRE (www.openaire.eu) is moving from a publication infrastructure to a more comprehensive infrastructure that covers all types of scientific output funded by the European Commission, and is widening to other European funding streams. It harvests content from a range of European repositories and ensures raised visibility of valuable open access content, as well as links to project and funding information. To ensure interoperability across these research infrastructures, a common approach is needed that adheres to existing and future guidelines.

In this context, an integrated suite of guidelines has been developed. The poster will briefly outline the OpenAIRE Guidelines: the Guidelines for Data Archive Managers, for Literature Repository Managers and for CRIS Managers.

By implementing all three sets of the OpenAIRE Guidelines, repository managers will enable authors who deposit publications in their repository to fulfil the EC Open Access requirements, as well as the requirements of other (national or international) funders with whom OpenAIRE cooperates.

In addition, the guidelines will allow the OpenAIRE infrastructure to offer value-added services such as discoverability, linking, and the creation of enhanced publications – in short, building the stepping stones for a linked data infrastructure for research.
Attachment | Size |
---|---|
Poster proposal for OR2013_Guidelines.pdf | 96.96 KB |
ZENODO ‐ A new innovative service for sharing all research outputs
This poster will present ZENODO, a new simple and innovative service that enables researchers, scientists, EU projects and institutions to share and showcase multidisciplinary research results (data and publications) that are not part of the existing institutional or subject‐based repositories of the research communities.
ZENODO enables researchers, scientists, EU projects and institutions to:
- Easily share the long tail of small research results in a wide variety of formats, including text, spreadsheets, audio, video, and images, across all fields of science.
- Display and curate their research results, get credit by making the results citable, and integrate them into existing reporting lines to funding agencies like the European Commission.
- Easily access and reuse shared research results.
The poster will highlight use‐cases of ZENODO by The John G. Wolbach Library at Harvard‐Smithsonian Center for Astrophysics as well as the European Middleware Initiative (EU FP7 project). ZENODO is developed under the EU FP7 project OpenAIREplus (grant agreement no. 283595) by CERN based on Invenio.
Attachment | Size |
---|---|
Poster proposal for OR2013_Zenodo.pdf | 93.33 KB |