Research data management can be seen as a cultural change. The traditional output of scientific work has always been "the paper". Publication in a journal was the way to make your research count, and that still holds true today. But additional ways to publish your work have emerged. Since science is becoming more data-driven every day, research data has become a valid product of science in its own right, which should be published or at least made available to the community. The guiding idea for RDM @ MATH+ is to provide a single point of entry to all research data handled in our projects. We don't want to create an additional silo where data has to be stored, but rather a system which acts as bibliographic evidence or a finding aid for your research data.
The slides from the orientation day on 11 October can be found here.
Since all members of MATH+ are employed at one of the partner institutions, their respective policies for handling research data are the first place to go if questions arise. Over the lifetime of MATH+, more best practices will emerge, which will then be published on this page.
Here you can find the research data policy of:
The MATH+ Research Data Management Organiser (RDMO) has been online since 01 October 2021. Some background information and the direct link to the tool can be found in the members area.
We follow the approach of offering a guided online interview for all things concerning the management of data in the various projects of our cluster. Completing this interview results in a DFG-compliant Data Management Plan, which helps you cover all relevant aspects of handling your data. It can be shared with others during the course of your project and acts as a sort of how-to for the data when the time comes for publishing or reuse.
This page will also be the place where some best practice examples of Data Management Plans (DMPs) will be displayed in the near future (early 2022). These can be either DMPs for projects with exceptionally large data sets or for different kinds of data, e.g. software. Some of the big data sets may even be documented here with their accompanying metadata, simply to make them more visible, since they usually require specialized IT facilities for storing, staging and processing.
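To give an impression of what such accompanying metadata might look like, here is a minimal sketch of a machine-readable data set description, loosely modelled on common DataCite-style fields. All field names and values below are illustrative assumptions, not a MATH+ or RDMO specification.

```python
import json

# Minimal, illustrative metadata record for a published data set.
# Field names loosely follow common DataCite properties; all values
# are placeholders, not an actual MATH+ data set.
record = {
    "identifier": "10.0000/example-doi",  # placeholder DOI
    "title": "Example simulation output data set",
    "creators": ["Doe, Jane"],
    "publicationYear": 2022,
    "resourceType": "Dataset",
    "formats": ["HDF5"],
    "size": "2.4 TB",
    "rights": "CC BY 4.0",
}

# Serialise to JSON so the record could be harvested or indexed
# by a catalogue that acts as a finding aid rather than a data silo.
print(json.dumps(record, indent=2))
```

A record like this can live in a catalogue while the data itself stays on the specialized storage it requires.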