Conference Presentations 2011

  • IASSIST 2011 - Data Science Professionals: A Global Community of Sharing, Vancouver, BC
    Host Institutions: Simon Fraser University and the University of British Columbia

C1: Recent Developments in the DDI Implementation Landscape II (Wed, 2011-06-01)
Chair: Arofan Gregory, Open Data Foundation

  • QDDS - combining questionnaire development and survey documentation
    Oliver Hopt (GESIS - Leibniz Institute for Social Sciences)
    Brigitte Mathiak (GESIS - Leibniz Institute for Social Sciences)

    [abstract]

    QDDS is a general approach to creating and designing questionnaires and to documenting the changes and design decisions made in the process. This documentation is especially important for panels and other surveys that re-use instruments, as it allows users to build on decisions already made and on existing questionnaire elements. The alternative, documenting the changes manually, is labor-intensive and error-prone. A further goal is a multi-purpose questionnaire editor that can be used to design surveys regardless of the target distribution system. Currently we use the DDI 2.1 standard as the general file format and are working on a switch to DDI 3.1. Using these standards makes it possible to import questionnaires into structured survey documentation without any loss of information (a simplified parsing sketch follows this session's listing). In this paper we describe how we hide the complexity of DDI by focusing on its main entities. We also describe the architecture and how it leads to an easy-to-use interface that lets primary researchers work with the tool efficiently, without in-depth documentation skills. Furthermore, we sketch out the changes necessary to switch to DDI 3.1 and how that switch will improve interaction with future information systems based on DDI 3.1. At the last IASSIST we presented mainly the overall concept and the rationale for supporting DDI 3; this year we want to give a first look at the actual solution and the corresponding changes required at the user-interface level.

    Presentation:
  • Colectica Demonstration
    Jeremy Iverson (Algenta Technologies)
    Dan Smith (Algenta Technologies)

    [abstract]

    Colectica is a DDI 3-based platform for creating, documenting, managing, distributing, and discovering data. Colectica aims to create publishable documentation as a by-product of the data management process. This demonstration will focus on features that have been added to Colectica over the past year, including:

      • Metadata repository for multi-user collaboration
      • Workflow management
      • Automated metadata harmonization
      • Improved Web-based data discovery and dissemination

    Presentation:
  • Keeping up with Questasy
    Alerk Amin (CentERdata)

    [abstract]

    Questasy is a web application for data dissemination based on DDI 3. It manages studies, questions, variables, publications, and more for longitudinal panel surveys. It was developed primarily for the LISS Data Archive (http://www.lissdata.nl), but is freely available to other organizations. This presentation will cover the new developments in Questasy since IASSIST 2010.

    Presentation:
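
    The QDDS abstract above relies on DDI 2.1 (DDI Codebook) as an exchange format, so that questionnaires can be imported into structured survey documentation without information loss. The following Python sketch illustrates that kind of import at its simplest: it pulls variable names and question texts out of a DDI-Codebook-style XML file. The file name and the exact element layout are illustrative assumptions, not the QDDS implementation.

    # Minimal sketch: read variable names and question texts from a
    # DDI 2.x Codebook-style XML file. The file name and the exact element
    # layout are illustrative assumptions, not the QDDS implementation.
    import xml.etree.ElementTree as ET

    def read_questions(path):
        """Return (variable name, question text) pairs from a DDI Codebook file."""
        root = ET.parse(path).getroot()
        questions = []
        # DDI 2.x Codebook describes variables under dataDscr/var; the question
        # wording sits in qstn/qstnLit (assuming no XML namespace is used).
        for var in root.iter("var"):
            literal = var.find("./qstn/qstnLit")
            if literal is not None and literal.text:
                questions.append((var.get("name"), literal.text.strip()))
        return questions

    if __name__ == "__main__":
        for name, text in read_questions("questionnaire_ddi21.xml"):  # hypothetical file
            print(f"{name}: {text}")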

C2: DataCite - Making Data Citable (Wed, 2011-06-01)
Chair: Brigitte Hausstein, GESIS - Leibniz Institute for Social Sciences

  • How DataCite and CODATA support data citation
    Jan Brase (DataCite)
    Presentation:
  • DataCite Canada: Canada’s Data Registration Centre
    Karen Morgenroth (Canada Institute for Scientific and Technical Information - CISTI)
    Presentation:
  • UC3 Services that Support the Data Life Cycle - the Centrality of Persistent Identifiers
    Patricia Cruse (University of California Curation Center, California Digital Library)
    Presentation:
  • da|ra - The German Registration Portal for Social and Economic Data
    Brigitte Hausstein (GESIS - Leibniz Institute for Social Sciences)
    Anja Wilde (GESIS - Leibniz Institute for Social Sciences)
    Wolfgang Zenk-Möltgen (GESIS - Leibniz Institute for Social Sciences)

    [abstract]

    In the interest of good scientific practice there is a demand to make collected primary data publicly accessible, so that not only the final research results are visible but the entire research process can be followed as well. This is why GESIS - Leibniz Institute for Social Sciences (GESIS) and the ZBW - Leibniz Information Centre for Economics (ZBW) decided to implement da|ra, the DOI registration portal for German social and economic data. This infrastructure lays the foundations for permanent identification, storage, and localization of research data and, ultimately, for reliable citation. A DOI name permanently identifies an object (such as a research data set) in the digital environment. The DOI name will not change, even though the information about the data, including where to find it, may change over time (a minimal resolution sketch follows this session's listing). In cooperation with DataCite, the international initiative to establish easier access to digital research data, GESIS already operates the non-commercial registration agency for social science data. The service was launched in 2010 with a pilot phase to set up the technical and organizational framework; 4,780 studies have since been registered with DOI names. From 2011 onwards the registration service will be expanded to economic data. The paper will deal with the technical and organizational solutions of the DOI registration portal and focus on how the new features are integrated into the existing DOI service.

    http://www.gesis.org/dara
    http://www.zbw.eu
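
    The da|ra abstract above hinges on the defining property of DOI names: the identifier stays fixed while the location of the data may change, and resolution through the DOI system always leads to the current landing page. The following Python sketch shows plain DOI resolution against the public doi.org resolver; the DOI string used here is a placeholder, not a registered da|ra identifier.

    # Minimal sketch: resolve a DOI name to its current landing page via the
    # public doi.org resolver. The DOI below is a placeholder, not a registered
    # da|ra identifier.
    import urllib.request

    def resolve_doi(doi):
        """Follow the doi.org redirects and return the current landing-page URL."""
        request = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
        with urllib.request.urlopen(request) as response:
            return response.geturl()  # final URL after all redirects

    if __name__ == "__main__":
        print(resolve_doi("10.1234/example-study"))  # hypothetical DOI name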

C3: Trusted and Valued: Data Quality Issues (Wed, 2011-06-01)
Chair: Karsten Boye Rasmussen, University of Southern Denmark

  • Recruitment, Participation, and Sampling: Researchers' Results in General Practice
    Thomas Lindsay (University of Minnesota)
    Andrew M. Sell (University of Minnesota)

    [abstract]

    Theoretical works on survey methodology lay out best practices and expected outcomes for researchers who wish to conduct social science research projects. While national surveys and other large-scale projects have the resources to ensure best practice, most social science researchers face compromises relating to cost, time, and availability of respondents. Over the past five years Survey Services at the University of Minnesota's College of Liberal Arts has conducted surveys for approximately 200 research projects using a variety of methods, with divergent outcomes. Using the metadata from these research projects, we have begun to test a variety of theoretical tenets of survey methodology against the empirical outcomes of the projects we have supported. Additionally, we are working with some of our researchers to experimentally test specific approaches to sampling and recruitment. In our presentation we will compare outcomes of various methodological choices made by our researchers. We will also discuss the results of our experimental tests within the framework of theoretical best practices and expectations.

    Presentation:
  • Improving Data Quality by Data Reviews and Tagging: First Pilot Experiences
    Rutger Kramer (Data Archiving and Networked Services (DANS))
    Marion Wittenberg (Data Archiving and Networked Services (DANS))
    Marjan Grootveld (Data Archiving and Networked Services (DANS))

    [abstract]

    Part of our data archive is open for self-archiving, which requires that researchers who deposit their data add the metadata themselves. In line with this practice and with current trends such as Web 2.0, we are interested in enriching our current metadata with user reviews. Moreover, just as research quality is measured by the number and quality of publications, it makes sense to value the quality of the underlying data and the degree to which they are fit for re-use. We therefore recently carried out a pilot study among researchers who downloaded data sets from our archive. We asked them to review the downloaded data set on several dimensions and also to provide keywords ("tags") that should help other researchers find the data set. We then presented both the tags and the averaged review scores as part of that set's metadata (a minimal aggregation sketch follows this session's listing). In this presentation we will give an impression of the reviews and the user tags, and of how these tags relate to professional keywords.

    Presentation:
  • Data Quality - Shaken and Stirred
    Karsten Boye Rasmussen (University of Southern Denmark)

    [abstract]

    Sometimes the platform for conference sessions is a stage that shifts underneath the presenters. Because our "data quality" session had a cancellation, it will now include an unadvertised presentation by the session chair. The presentation will show some of the common dimensions of, and reasons behind, the demand for data quality. However, because of the short notice, this presentation will move in the direction of a short stand-up, sing-and-dance show with audience participation. The subject of this closing presentation of the session will be an attack - hopefully a surprise attack - on the rationality that we expect from ourselves and from other requestors of data. Especially when we repeatedly demand higher data quality, we have to ask the question: "Are we worth it?"

    Presentation:
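
    The DANS pilot described above presents averaged review scores and user tags as part of a data set's metadata. The following Python sketch shows one minimal way such an aggregation could look; the field names and review dimensions are assumptions for illustration, not the DANS pilot's actual data model.

    # Minimal sketch: attach averaged review scores and collected user tags to a
    # data set's metadata record. Field names and review dimensions are
    # illustrative assumptions, not the DANS pilot's actual data model.
    from statistics import mean

    def enrich_metadata(metadata, reviews):
        """Add averaged per-dimension scores and a merged tag list to the record."""
        scores_by_dimension = {}
        tags = set()
        for review in reviews:
            for dimension, score in review["scores"].items():
                scores_by_dimension.setdefault(dimension, []).append(score)
            tags.update(review.get("tags", []))
        metadata["review_scores"] = {
            dimension: round(mean(scores), 2)
            for dimension, scores in scores_by_dimension.items()
        }
        metadata["user_tags"] = sorted(tags)
        return metadata

    if __name__ == "__main__":
        record = {"title": "Example survey data set"}  # hypothetical record
        reviews = [
            {"scores": {"documentation": 4, "fitness for re-use": 5}, "tags": ["survey", "panel"]},
            {"scores": {"documentation": 3, "fitness for re-use": 4}, "tags": ["survey"]},
        ]
        print(enrich_metadata(record, reviews))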