Conference Presentations 2013

  • IASSIST 2013 - Data Innovation: Increasing Accessibility, Visibility, and Sustainability, Cologne, Germany
    Host Institution: GESIS – Leibniz Institute for the Social Sciences

E2: Making Complex Confidential Microdata Useable (Thu, 2013-05-30)
Chair: Jennifer Darragh

  • Generating Useful Test Data for Complex Linked Employer-employee Datasets
    Peter Jacobebbinghaus (German Data Service Center for Business and Organizational Data (DSZ-BO))

    [abstract]

    When data access for researchers is provided via remote execution or on-site use, it can be beneficial for data users if test datasets that mimic the structure of the original data are disseminated in advance. With these test data, researchers can develop their analysis code and avoid delays caused by otherwise likely syntax errors. The aim of test data is not to provide meaningful results or to preserve statistical inferences. Instead, it is important to maintain the structure of the data in such a way that any code developed with the test data will also run on the original data without further modification. Achieving this goal can be challenging and costly for complex datasets such as linked employer-employee datasets (LEED), as the links between the establishments and the employees also need to be maintained. We illustrate how useful test data can be developed for complex datasets in a straightforward manner at limited cost. Our approach mainly relies on traditional statistical disclosure control (SDC) techniques such as data swapping and noise addition. The structure of the data is maintained by adding constraints to the swapping procedure.
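
    To make the general idea concrete, the sketch below applies the two SDC techniques named in the abstract, constrained data swapping and noise addition, to a toy linked employer-employee table: the categorical attribute is swapped only within establishments, so the employer-employee links survive unchanged, while the continuous variable receives multiplicative noise. The column names (est_id, person_id, occupation, wage) and the pandas-based implementation are illustrative assumptions, not the authors' actual procedure.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)

    def make_test_data(employees: pd.DataFrame) -> pd.DataFrame:
        """Return test data with the same columns and employer-employee links
        as the input, but with swapped and noised values."""
        test = employees.copy()

        # Constrained swapping: permute the categorical attribute only among
        # employees of the same establishment, so the link between est_id and
        # its employees is preserved exactly.
        test["occupation"] = test.groupby("est_id")["occupation"].transform(
            lambda g: rng.permutation(g.to_numpy())
        )

        # Noise addition for a continuous variable: multiplicative noise keeps
        # wages positive and roughly on the original scale.
        test["wage"] = (test["wage"] * rng.normal(1.0, 0.1, len(test))).round(2)
        return test

    # Toy LEED with two establishments (values are arbitrary placeholders).
    employees = pd.DataFrame({
        "est_id":     [1, 1, 1, 2, 2],
        "person_id":  [10, 11, 12, 20, 21],
        "occupation": ["A", "B", "C", "A", "B"],
        "wage":       [2500.0, 3100.0, 2800.0, 4000.0, 3900.0],
    })
    print(make_test_data(employees))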

E3: Case Studies: Maximizing Usage of Important Datasets (Thu, 2013-05-30)
Chair: Barry Radler

  • Development of the Health Research Data Repository (HRDR) and the Translating Research in Elder Care (TREC) Longitudinal Monitoring System (LMS)
    James Doiron (University of Alberta)
    Pascal Heus (Metadata Technologies North America)

    [abstract]

    The Health Research Data Repository (HRDR), located within the Faculty of Nursing, University of Alberta, Canada, entered its operational phase in January 2013. The HRDR employs secure remote access for its approved users and provides a secure and confidential environment for supporting health-related research projects and the management of their data and metadata. Additionally, the HRDR has a mandate to promote educational opportunities regarding research data management best practices. One of the initial projects underway within the HRDR is a collaboration with Metadata Technologies North America (MTNA) and Nooro Online Research to develop a data infrastructure platform supporting a Longitudinal Monitoring System (LMS) using data collected within the Translating Research in Elder Care (TREC) project (http://www.trecresearch.ca). Specifically, the LMS data infrastructure platform uses DDI-based metadata to support the collection/ingestion, harmonization, and merging of TREC data, as well as the timely delivery of reports and outputs based on these data. The development of the HRDR will be discussed, along with a current overview of its status and projects. Specific focus will be placed upon the development, current status, and future work relating to the TREC Longitudinal Monitoring System project.

  • From 1911 to 2013: Renewing UK Birth Cohort Studies Metadata
    John Johnson (University of London)
    Jack Kneeshaw (UK Data Archive)

    [abstract]

    CLOSER (Cohorts and Longitudinal Studies Enhancement Resources) is a five-year program which aims to maximize the use, value and impact of the UK's longitudinal studies both within the UK and abroad. The program is run by a network of nine of the UK's leading studies (eight cohorts and one panel study), with participants born between 1911 and 2007. A major strand will be documenting these surveys and their data (over 250 survey instruments and around 250,000 data variables) in DDI-L. The surveys cover a wide range of collection methods, from paper questionnaires to CAI, as well as biomedical and linked administrative data. The presentation will cover the workflow and systems used to capture paper questionnaires, archived documents, available DDI 2.0 and other electronic metadata from the surveys, and metadata captured from the data, into DDI-L. This includes an in-house application for questionnaire capture written in Ruby on Rails, Python-based data quality tools to interface with SPSS, and Colectica for overall data management and its co-ordination across the eight studies. The presentation will also highlight how some recent changes to the DDI specification will assist in the management of these projects.
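
    As a hedged illustration of the kind of Python-to-SPSS interface mentioned above (CLOSER's in-house tools are not described in detail in the abstract), the sketch below uses the pyreadstat library, an assumed choice, to read only the metadata of an SPSS file and flag variables that lack a variable label; the file name is a placeholder.

    import pyreadstat

    def flag_unlabelled_variables(path: str) -> list[str]:
        """Return the names of variables in an SPSS (.sav) file that have no
        variable label, reading the file's metadata only (no data rows)."""
        _, meta = pyreadstat.read_sav(path, metadataonly=True)
        return [name
                for name, label in zip(meta.column_names, meta.column_labels)
                if not label]

    if __name__ == "__main__":
        # "wave1.sav" is a placeholder file name.
        for name in flag_unlabelled_variables("wave1.sav"):
            print(f"{name}: missing variable label")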

E4: Case Studies in Research Data Management (Thu, 2013-05-30)
Chair: Maria A. Jankowska

  • Erasmus University Rotterdam's approach to supporting researchers with data management and storage
    Paul J. Plaatsman (Erasmus University Rotterdam)

    [abstract]

    As at other Dutch academic institutions, we have talked a great deal about research data. Following cases of fraud at our university and at other universities in the Netherlands, policy makers and university boards became more insistent that university libraries help with better storage of research data and educate PhD candidates and young researchers in proper ways of handling their research data. So we had to move from talk to action. We are presently doing so by offering an information course about research data within our existing Research Matters portal. We also want to offer our researchers a safe environment in which to store their research data for the medium term, 5 to 10 years, and to think about solutions for datasets that need to be stored indefinitely in the national data archive, DANS. We are running a pilot with three types of dataset (experimental, survey and qualitative) from researchers of the Erasmus Research Institute of Management (ERIM) in the Dutch Dataverse Network, hosted by the University of Utrecht. The Dataverse Network facility is now being used by four Dutch universities.

  • RDM Roadmap@Edinburgh - An Institutional Approach
    Stuart Macdonald (University of Edinburgh)
    Robin Rice (University of Edinburgh)

    [abstract]

    The first institutional Research Data Management (RDM) policy by a UK Higher Education Institution was passed by the Senate of the University of Edinburgh in May 2011. This paper discusses plans to implement this policy by developing the services needed to support researchers and fulfill the University's obligations within a changing national and international setting. Significant capital funding has been committed to a major RDM and storage initiative led by Information Services (IS) for the academic year 2012-13. An RDM steering group, made up of academic representatives from the three colleges and IS, has been established to ensure that the proposed services meet the needs of university researchers. It also oversees the activity of an IS cross-divisional RDM Policy Implementation Committee, charged with delivering the policy objectives. An RDM Roadmap (http://www.ed.ac.uk/polopoly_fs/1.101223!/fileManager/UoE-RDM-Roadmap201121102.pdf) was published in November 2012 to provide a high-level overview of the work to be carried out. The roadmap focuses on four strategic areas: data management support, data management planning, active data infrastructure and data stewardship. IS will gather requirements from research groups and IT professionals, and is conducting pilot work with volunteer research units within the three colleges to develop the functionality and presentation of the key services.

  • Dataverse Network and Open Journal Systems Project to Encourage Data Sharing and Citation in Academic Journals
    Eleni Castro (Institute for Quantitative Social Science (IQSS) Harvard University)

    [abstract]

    As data sharing technology and data management practices have developed over the past decade, academic journals have come under pressure to disseminate the data associated with published articles. Harvard University's Institute for Quantitative Social Science (IQSS) recently received a two-year grant from The Alfred P. Sloan Foundation to partner with Stanford University's Public Knowledge Project (PKP) in order to help make data sharing and preservation an intrinsic part of the scholarly publication process, and to create awareness specifically among journal editors and publishers. This presentation will provide an overview of the collaboration between PKP's Open Journal Systems (OJS) and IQSS's Dataverse Network (DVN) teams, who are currently building the technology needed to support the seamless publication of research data and articles together, and to support new forms of social science data, readership and analysis. The immediate impact of the project will be to increase the number of readily replicable articles published and the number of social science journals that adopt best data management and citation practices. The broadest impact of the project will be to increase the pace of discovery in the social sciences and to broaden research opportunities for younger scholars.

  • Promoting data accessibility, visibility and sustainability in the UK: the Jisc Managing Research Data Programme
    Laura Molloy (University of Glasgow)
    Simon Hodson (Jisc)

    [abstract]

    Driven by new research objectives and opportunities requiring the interdisciplinary reuse of data, as well as by research funder and (increasingly) journal policies, the case for skills in research data management (RDM) is becoming clearer to researchers of all disciplines. Some disciplines are historically well served by national data centres and perpetuate a culture of organized data deposit, management, sharing and reuse. Many other researchers, however, work in disciplines without this heritage or produce data that are not appropriate for data centre hosting. Institutions face a concomitant rise in responsibility for the formulation and delivery of appropriate and accessible RDM services and infrastructure for their researchers. Across the UK, the Jisc Managing Research Data programme is stimulating improved RDM practice across disciplines and staff groups via the development of tailored policy, services, technical infrastructure and training. Our paper will describe the work of the programme and complementary work by the Digital Curation Centre. We shall discuss emerging models in institutional approaches which may be of use elsewhere. Above all, we shall examine how data management planning and training activities may be enhanced by a consideration of disciplinary differences, and suggest the benefits of drawing on expert partners beyond the institution.

E5: Never Say Never: Working with Seemingly Disparate Data (Thu, 2013-05-30)
Chair: Bobray Bordelon

  • Towards making African longitudinal population-based demographic and health data sharable: Data Documentation practices in the past, present and future
    Chifundo Kanjala (London School of Hygiene and Tropical Medicine, ALPHA Network)

    [abstract]

    African longitudinal population-based studies have been collecting demographic, socioeconomic and health data for, on average, over a decade. Efforts are currently being made to make these data more sharable. The current study assesses the extent of the implementation of structured data documentation using the Data Documentation Initiative (DDI) and other related specifications and standards. This is done by describing efforts currently underway among members of the two main networks uniting these studies: the INDEPTH (International Network for the continuous Demographic Evaluation of Populations and their Health) and ALPHA (African longitudinal population-based studies) networks.

  • Metadata for Complex Information
    Lisa Neidert (University of Michigan)

    [abstract]

    Researchers create analysis files that are not always based on numeric data. An example would be a database of abortion laws by state and time. US states vary in their abortion regulations: age limits, ultrasound requirements, mandatory waiting periods, etc. Typically, this cornucopia of regulations has been added to, modified and deleted over time. And, to complicate matters, sometimes there are jurisdictional variations which may or may not stay constant. Another database would be state-based legislation centered on specific topics and sub-topics, which allows for comparisons over time or across states. A final example that incorporates both data and information would be the yes/no county breakdowns for citizen votes on amendments to state constitutions on a variety of topics. Important information the researcher might collect would be the text of the statute, the language on the ballot, the source for the vote, legislative vs. citizen-based, the year, and the type of election (a sketch of such a record structure follows this session's abstracts). Are data documentation initiatives flexible enough to import this type of information? Clearly, the information is structured if it is disseminated via a searchable database. Should social science archives care about preserving these efforts? Would these be considered data under NSF/NIH data sharing policies?

  • Distributed archiving of social science research data: On the way to best-practice guidelines
    Reiner Mauer (GESIS - Leibniz Institute for the Social Sciences)
    Oliver Watteler (GESIS - Leibniz Institute for the Social Sciences)

    [abstract]

    Distributed archiving is a common topic for most institutions taking care of research data. Organizational and technical solutions are available, but intellectual input is still necessary to keep creation contexts coherent for third parties. Institutions with similar research interests may hold similar data. The Council of European Social Science Data Archives (CESSDA) is one example of an international collaboration; the German Data Forum takes care of a national data infrastructure. The connection of metadata is the key to accessing distributed data sources. Social science data archives commenced work on an XML standard for this purpose in the mid-1990s, the Data Documentation Initiative (DDI). Digital Object Identifiers and other persistent identifiers (DOI, URN, etc.) facilitate the technical linkage of objects in various locations. But how do you connect qualitative and quantitative data from the same project which are archived in different locations? Where do you best document an international project context if datasets are preserved in national archives? How does a scholar learn about variations in data holdings when a full version is accessible through a Research Data Centre and a reduced version is publicly available? These are some of the intellectual challenges for planning distributed services, and answers to them are necessary to assure the cohesion of creation contexts. Best practice guidelines are needed.
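
    As a small illustration of the persistent-identifier linkage mentioned in the abstract above, the sketch below resolves a DOI through the doi.org proxy using only the Python standard library; it is a generic example, not part of any of the services described.

    import urllib.request

    def resolve_doi(doi: str) -> str:
        """Return the URL a DOI currently resolves to, via the doi.org proxy."""
        request = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
        with urllib.request.urlopen(request) as response:
            return response.geturl()  # final URL after following redirects

    # 10.1000/182 is the DOI of the DOI Handbook, used here only as a
    # well-known, resolvable example.
    print(resolve_doi("10.1000/182"))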

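    Returning to the "Metadata for Complex Information" abstract earlier in this session, a minimal sketch of the kind of non-numeric record it describes is shown below; the field names simply mirror the items listed there, and the dataclass representation is an illustrative assumption rather than anything proposed by the presenter.

    from dataclasses import dataclass

    @dataclass
    class BallotMeasureRecord:
        """One county-level vote on a state constitutional amendment, holding
        the contextual information listed in the abstract."""
        state: str
        county: str
        year: int
        election_type: str    # e.g. general, primary, special
        origin: str           # legislative vs. citizen-based
        statute_text: str     # text of the statute
        ballot_language: str  # language on the ballot
        vote_source: str      # source for the vote count
        yes_votes: int
        no_votes: int

    # Obviously-dummy values, just to show the shape of a record.
    example = BallotMeasureRecord(
        state="XX", county="Example County", year=2000,
        election_type="general", origin="citizen-based",
        statute_text="(statute text)", ballot_language="(ballot language)",
        vote_source="(source)", yes_votes=0, no_votes=0,
    )
    print(example)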