Difference between revisions of "Judging/Rubric categories"
Revision as of 15:45, 26 June 2015
iGEM Judging Rubric
Judging is a complex task and can seem mysterious to iGEMers at times. We're aiming to help teams understand how they are evaluated and provide more information ahead of time. While the individual decisions judges make about teams must remain confidential until after the Jamboree, the systems they use do not.
The main mechanism through which iGEM teams are evaluated is called the rubric. The rubric is composed of three main sections:
- Medals Section
- Project Section
- Special Awards Section
Each section is called a category. Within each category, there are between two and eight questions that we call aspects. Each aspect has six language choices that cover a range of how the evaluating judge should feel about the quality of the work in that aspect. The exact language choices will not be shown: we want iGEMers to know how they are being evaluated, but we don't want to "teach to the test". The language choices correspond roughly to:
- Amazing!
- Great
- Good
- Present
- Bad
- Absent
Each section has a separate function and correlates with different awards. The medals section, as the name suggests, covers whether a team's work convinces the judges that each medal criterion has been achieved. These criteria can be found on the Medals Page and won't be reiterated here.
The Project category is composed of two sub-sections: the main project category and the track-specific category. The main project category has eight aspects while the track-specific category only has two. Combined, these ten aspects determine the scores for the teams who will win their tracks and the finalist teams. This category is arguably the most important part of the evaluation for an iGEM team.
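To make the arithmetic concrete, the sketch below (in Python) shows one way the ten Project aspects and the six-point language scale could combine into a single score. Note that the numeric values assigned to each language choice and the simple averaging are illustrative assumptions for explanation only; this page does not specify how judges' choices are actually converted into numbers or aggregated across judges.

```python
# Illustrative sketch only: the 6-to-1 numeric scale and the averaging below
# are assumptions made for explanation, not the actual iGEM scoring formula.

LANGUAGE_CHOICES = {
    "Amazing!": 6,
    "Great": 5,
    "Good": 4,
    "Present": 3,
    "Bad": 2,
    "Absent": 1,
}

def project_score(choices):
    """Average one judge's language choices over the ten Project aspects
    (eight main-project aspects plus two track-specific aspects)."""
    if len(choices) != 10:
        raise ValueError("expected one language choice per Project aspect (10 total)")
    return sum(LANGUAGE_CHOICES[c] for c in choices) / len(choices)

# Example: a judge marks eight aspects "Good" and two aspects "Great".
example = ["Great", "Good", "Good", "Good", "Good",
           "Good", "Good", "Good", "Great", "Good"]
print(project_score(example))  # 4.2
```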
The final section of the judging rubric determines special awards. Each award has its own category in the rubric with either four or five aspects. This part of the evaluation integrates with the standard page system we have built. To be eligible for an award, teams need to complete the corresponding page on the wiki and fill out a 150-word description on the judging form.
This evaluation rubric is the result of more than four years of development, hundreds of hours of discussion, dozens and dozens of meetings, and thousands of emails between some of the most experienced PIs in iGEM. We are continuously improving and tweaking the rubric, but the system we have is extremely effective at selecting the best teams that represent the values of iGEM.
Project
- How impressive is this project?
- How creative or novel is the team's project?
- Did the project work?
- How much did the team accomplish?
- Is the project likely to have an impact?
- How well are engineering and design principles used?
- How thoughtful and thorough was the team's consideration of human practices?
- How complete is the team's effort to attribute work?
Track Specific
Track Specific - Standard Tracks
- Did the team design a project based on synthetic biology and standard parts?
- Are the parts' functions and behaviors well documented in the Registry?
Track Specific - Art & Design
- How compelling was the project installation in the art & design exhibition space?
- How well did the project address potential applications or implications of synthetic biology?
Track Specific - Community Labs
- Did the team design a project based on synthetic biology?
- Did the team interact with another iGEM team either through a collaboration or a mentoring relationship?
Track Specific - Hardware
- Did the team demonstrate utility and functionality in their hardware prototype?
- Is the documentation of the hardware system (design files, bill of materials, assembly instructions and/or software) sufficient to enable reproduction by other teams?
Track Specific - High School
- Did the team design a project based on synthetic biology and standard parts?
- Did the team interact with another iGEM team either through a collaboration or a mentoring relationship?
Track Specific - Measurement
- Is the team's measurement protocol likely to be of use to the synthetic biology community?
- Is the protocol well documented, including the parts' functions and behaviors in the Registry?
Track Specific - Software
- How useful is the software to the synthetic biology community?
- Is the software designed to be extended and modified by other developers?
Special Prizes
Wiki
- Do I understand what the team accomplished?
- Is the wiki attractive and easy to navigate?
- Does the team provide convincing evidence to support their conclusions?
- How complete is the team's effort to attribute work?
- Will the wiki be a compelling record of the team's project for future teams?
Poster
- Clarity: Could you follow the poster flow?
- How professional is the graphic design in terms of layout and composition?
- Did you find the poster appealing?
- How complete is the team's effort to attribute work?
- How competent was the team at answering questions?
Integrated Human Practices
- (Aspects will be posted soon)
- (Aspects will be posted soon)
- (Aspects will be posted soon)
- (Aspects will be posted soon)
- (Aspects will be posted soon)
Education and Public Engagement
- Did the team demonstrate an innovative educational synthetic biology tool/activity?
- Was a dialogue about synthetic biology established between the team and the public?
- How much did the team accomplish through their efforts?
- Is the tool/activity reusable by other teams, educators, and engagers?
- Did the team learn from the interaction with the public?
Model
- How impressive is the mathematical modeling?
- Did the model help the team understand their device?
- Did the team use measurements of the device to develop the model?
- Does the modeling approach provide a good example for others?
Innovation in Measurement
- Is the measurement potentially repeatable?
- Is the protocol well described?
- Are there web-based support materials?
- Is it useful to other projects?
- Was a standard reference sample included?
Supporting Entrepreneurship
- Customer Discovery - Has the team interviewed a representative number of potential customers for the technology and clearly communicated what they learned?
- Based on their interviews, does the team have a clear hypothesis describing their customers' needs?
- Does the team present a convincing case that their product meets the customers' needs?
- Has the team demonstrated a minimum viable product (MVP) and had customers commit (LOI, etc.) to purchasing or using it?
- Does the team have a viable and understood business model/value proposition to take their company to market?
Applied Design
- How well did the project address potential applications and implications of synthetic biology?
- How creative, original, and compelling was the project?
- How impressive was the project installation in the art & design exhibition space?
- How well did the team engage in collaboration with people outside their primary fields?
Supporting Software
- How well does the software use and support existing synthetic biology standards and platforms?
- Was this software validated by experimental work?
- Did the team use non-trivial algorithms or designs?
- How easily can others embed this software in new workflows?
- How user-friendly is the software?
New Basic Part
- How does the documentation compare to BBa_K863006 and BBa_K863001?
- How new/innovative is it?
- Did the team show that it works as expected?
- Is it useful to the community?
New Composite Part
- How does the documentation compare to BBa_K404122 and BBa_K863005?
- How new/innovative is it?
- Did the team show that it works as expected?
- Is it useful to the community?
Part Collection
- Is this collection a coherent group of parts meant to be used as a collection, or just a list of all the parts the team made?
- How does the documentation compare to BBa_K747000 and BBa_K525710?
- Did the team submit an internally complete collection allowing it to be used without any further manipulation or parts from outside the Registry?
- Did the team finish building at least one functional system using this collection?
- Did the team create excellent documentation to allow future use of this collection?