- Collaboration needs to be well defined. Questions should ask about different discrete interactions (do you email, exchange materials, have lab discussions, etc.?)
- People are happy to give 10 minutes for an undergraduate's final year project survey :)
- The survey should be completed in one sitting. Breaking it up would be annoying for respondents.
- All published results need to be anonymised. People will be reluctant to reply otherwise.
- January 6th, Week 0 of Spring term, is the best date to send the survey - no exams to mark, and the backlog of emails from Christmas will have been cleared.
- An incentive may be helpful - especially for getting responses from post-docs, PhD students, etc. Whether this is one coffee per person or an Amazon voucher (maybe a raffle, at 20p per respondent?) is to be decided.
- People don't always say that they've "collaborated" with people they've published with, reinforcing the need for a better descriptor.
- It seems that around 5-10 grant proposals are made per year per PI, with a ~30% rejection rate.
- Grants may need to be broken down by source (research councils, charities, industry, studentships, etc.)
- Strong inter-departmental collaborations exist between biology, chemistry and computer science, but there are probably more.
- It will be interesting to look at the flow of data between Mac/Windows/*nix users, and whether there's an OS barrier there.
- Seeing how physical distance correlates with the graph distance between nodes will be interesting; perhaps the further apart people are (within the University), the fewer edges there are (see the sketch after this list).
- Some people have data that they would publish, but don't have the time, or don't think it would be published in a prestigious enough journal to be worthwhile.
- People tend to collaborate regularly with 5-10 people outside of the university. How we define or question this is another matter.
- People can easily list people they've collaborated with over the last 2-3 years, but may not know who they've shared data or materials with, as the handling of such requests may be delegated to someone else.
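As a rough illustration of the distance idea above, here is a minimal sketch of how that comparison might be run, assuming the survey responses have already been turned into a collaboration graph. The choice of networkx and scipy is an assumption about tooling, and the respondent names, departments, and coordinates are made up purely for illustration; the real analysis would use the survey data and actual building locations.

```python
# Sketch: compare physical distance between respondents' buildings with their
# graph (shortest-path) distance in the collaboration network.
# All names, departments, and coordinates below are hypothetical.
import math
import networkx as nx
from scipy.stats import spearmanr

# Hypothetical campus coordinates (metres, arbitrary origin) per department.
coords = {
    "Biology": (0, 0),
    "Chemistry": (150, 80),
    "Computer Science": (400, 300),
    "Physics": (600, 120),
}

# Collaboration graph: nodes are respondents, each tagged with a department.
G = nx.Graph()
G.add_nodes_from([
    ("alice", {"dept": "Biology"}),
    ("bob", {"dept": "Chemistry"}),
    ("carol", {"dept": "Computer Science"}),
    ("dave", {"dept": "Physics"}),
])
G.add_edges_from([("alice", "bob"), ("bob", "carol"), ("carol", "dave")])

def physical_distance(u, v):
    """Euclidean distance between the buildings of two respondents."""
    (x1, y1) = coords[G.nodes[u]["dept"]]
    (x2, y2) = coords[G.nodes[v]["dept"]]
    return math.hypot(x2 - x1, y2 - y1)

# Collect (physical distance, graph distance) for every connected pair.
path_lengths = dict(nx.all_pairs_shortest_path_length(G))
pairs = []
for u in G:
    for v in G:
        if u < v and v in path_lengths[u]:
            pairs.append((physical_distance(u, v), path_lengths[u][v]))

phys, graph_dist = zip(*pairs)
rho, p = spearmanr(phys, graph_dist)
print(f"Spearman correlation between physical and graph distance: {rho:.2f} (p={p:.2f})")
```

Spearman rather than Pearson seems the safer choice here, since graph distance only takes small integer values, so a rank correlation makes fewer assumptions about the relationship being linear.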
A Conclusion?
There is a tradeoff between the amount of information and the number of respondents. We want enough information to be useful, but don't want to ask so many questions that people are put off. Achieving this balance will be tricky. To eliminate bias in the survey, questions will have to be specific enough that we can confidently say people are interpreting them correctly, but not so specific as to burden respondents with the time needed to answer.