iGEM Judging Rubric

Judging is a complex task and can seem mysterious to iGEMers at times. We aim to help teams understand how they are evaluated by providing more information ahead of time. While the individual decisions judges make about teams must remain confidential until after the Jamboree, the system they use does not.

The main mechanism through which iGEM teams are evaluated is called the rubric. The rubric is composed of three main sections:

  1. Medals Section
  2. Project Section
  3. Special Awards Section

Each section is made up of one or more categories. Within each category, there are 2-8 questions that we call aspects (shown below). Each aspect has 6 language choices that cover the range of how the evaluating judge should feel about the quality of the work. Unlike the aspects, these language choices will not be shown. We want iGEMers to know how they are being evaluated, but we don't want to "teach to the test." The language choices correspond roughly to the following (a small illustrative sketch of the overall structure appears after this list):

  1. Amazing!
  2. Great
  3. Good
  4. Present
  5. Bad
  6. Absent
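
To make the structure concrete, here is a small, purely illustrative sketch of the hierarchy described above: categories hold a handful of aspects, and every aspect is judged against the same six language choices. The snippet is hypothetical Python, not official iGEM data or tooling; the names, ordering, and example values in it are assumptions rather than part of the rubric itself.

  # Purely illustrative, hypothetical sketch (not official iGEM data or tooling).
  # The rubric is a set of categories; each category holds a handful of aspects,
  # and every aspect is evaluated against the same six language choices.

  LANGUAGE_CHOICES = [
      "Amazing!",  # 1
      "Great",     # 2
      "Good",      # 3
      "Present",   # 4
      "Bad",       # 5
      "Absent",    # 6
  ]

  # Categories map to their aspects (2-8 questions each); only two categories
  # and a couple of aspects are shown here as examples.
  RUBRIC = {
      "Project": [
          "How impressive is this project?",
          "How creative or novel is the team's project?",
          # ... six more aspects, listed later on this page
      ],
      "Track-Specific - Standard Tracks": [
          "Did the team design a project based on synthetic biology and standard parts?",
          "Are the parts' functions and behaviors well documented in the Registry?",
      ],
  }

  # A single judge's evaluation is then one language choice per aspect, e.g.:
  example_evaluation = {
      "How impressive is this project?": "Great",
  }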

Each section of the rubric has a separate function and corresponds to different awards. The Medals section, as its name suggests, covers whether a team's work has convinced the judges that the team has achieved specific medal criteria. These criteria can be found on the Medals Page and won't be reiterated here.

The Project section is composed of two sub-sections: the main project category and the track-specific category. The main project category has eight aspects, while the track-specific category has only two. Combined, these ten aspects determine the scores that decide which teams win their tracks and which teams become finalists. This section is arguably the most important part of the evaluation for an iGEM team.

The final section of the judging rubric determines special awards. Each award has its own category in the rubric with either four or five aspects. This part of the evaluation integrates with the standard page system we have built. To be eligible for an award, teams need to complete the corresponding page on the wiki and fill out a 150-word description on the judging form.

This rubric is the result of more than four years of development, hundreds of hours of discussion, dozens and dozens of meetings, and thousands of emails between some of the most experienced advisers in iGEM. We are continuously improving and tweaking the rubric, but the system we have is extremely effective at selecting the best teams that represent the values of iGEM.

Project

  1. How impressive is this project?
  2. How creative or novel is the team's project?
  3. Did the project work?
  4. How much did the team accomplish?
  5. Is the project likely to have an impact?
  6. How well are engineering and design principles used?
  7. How thoughtful and thorough was the team's consideration of human practices?
  8. How complete is the team's effort to attribute work?

Track-Specific

Track-Specific - Standard Tracks

  1. Did the team design a project based on synthetic biology and standard parts?
  2. Are the parts' functions and behaviors well documented in the Registry?

Track-Specific - Art & Design

  1. How compelling was the project installation in the art & design exhibition space?
  2. How well did the project address potential applications or implications of synthetic biology?

Track-Specific - Community Labs

  1. Did the team design a project based on synthetic biology?
  2. Did the team interact with another iGEM team either through a collaboration or a mentoring relationship?

Track-Specific - Hardware

  1. Did the team demonstrate utility and functionality in their hardware prototype?
  2. Is the documentation of the hardware system (design files, bill of materials, assembly instructions and/or software) sufficient to enable reproduction by other teams?

Track-Specific - High School

  1. Did the team design a project based on synthetic biology and standard parts?
  2. Did the team interact with another iGEM team either through a collaboration or a mentoring relationship?

Track-Specific - Measurement

  1. Is the team's measurement protocol likely to be of use to the synthetic biology community?
  2. Is the protocol well documented, including the parts' functions and behaviors in the Registry?

Track-Specific - Software

  1. How useful is the software to the synthetic biology community?
  2. Is the software designed to be extended and modified by other developers?

Special Prizes

Wiki

  1. Do I understand what the team accomplished?
  2. Is the wiki attractive and easy to navigate?
  3. Does the team provide convincing evidence to support their conclusions?
  4. How complete is the team's effort to attribute work?
  5. Will the wiki be a compelling record of the team's project for future teams?

Presentation

  1. Clarity: Could you follow the presentation flow?
  2. How professional is the graphic design in terms of layout and composition?
  3. Did you find the presentation engaging?
  4. How complete is the team's effort to attribute work?
  5. How competent were the team members at answering questions?

Poster

  1. Clarity: Could you follow the poster flow?
  2. How professional is the graphic design in terms of layout and composition?
  3. Did you find the poster appealing?
  4. How complete is the team's effort to attribute work?
  5. How competent were the team members at answering questions?

Integrated Human Practices

  1. Did the team develop and communicate a more nuanced view of their overall project as a result of their human practices (HP) work?
  2. How much did the team accomplish through their HP efforts?
  3. Was the team's HP work integrated with their overall project and its goals?
  4. Is the team's HP work well documented and valuable to others?
  5. Is the team's HP work grounded in previous work and consistent with best practices in the field?

Education and Public Engagement

  1. Did the team demonstrate an innovative educational synthetic biology tool/activity?
  2. Was a dialogue about synthetic biology established between the team and the public?
  3. How much did the team accomplish through their efforts?
  4. Is the tool/activity reusable by other teams, educators, and engagers?
  5. Did the team learn from the interaction with the public?

Model

  1. How impressive is the mathematical modeling?
  2. Did the model help the team understand their device?
  3. Did the team use measurements of the device to develop the model?
  4. Does the modeling approach provide a good example for others?

Innovation in Measurement

  1. Is the measurement potentially repeatable?
  2. Is the protocol well described?
  3. Are there web-based support materials?
  4. Is it useful to other projects?
  5. Was a standard reference sample included?

Supporting Entrepreneurship

  1. Customer Discovery - Has the team interviewed a representative number of potential customers for the technology and clearly communicated what they learned?
  2. Based on their interviews, does the team have a clear hypothesis describing their customers' needs?
  3. Does the team present a convincing case that their product meets the customers' needs?
  4. Has the team demonstrated a minimum viable product (MVP) and had customers commit (LOI, etc.) to purchasing or using it?
  5. Does the team have a viable and understood business model/value proposition to take their company to market?

Applied Design

  1. How well did the project address potential applications and implications of synthetic biology?
  2. How creative, original, and compelling was the project?
  3. How impressive was the project installation in the art & design exhibition space?
  4. How well did the team engage in collaboration with people outside their primary fields?

Supporting Software

  1. How well does the software use and support existing synthetic biology standards and platforms?
  2. Was this software validated by experimental work?
  3. Did the team use non-trivial algorithms or designs?
  4. How easily can others embed this software in new workflows?
  5. How user-friendly is the software?

New Basic Part

  1. How does the documentation compare to BBa_K863006 and BBa_K863001?
  2. How new/innovative is it?
  3. Did the team show that it works as expected?
  4. Is it useful to the community?

New Composite Part

  1. How does the documentation compare to BBa_K404122 and BBa_K863005?
  2. How new/innovative is it?
  3. Did the team show that it works as expected?
  4. Is it useful to the community?

Part Collection

  1. Is this collection a coherent group of parts meant to be used as a collection, or just a list of all the parts the team made?
  2. How does the documentation compare to BBa_K747000 and BBa_K525710?
  3. Did the team submit an internally complete collection allowing it to be used without any further manipulation or parts from outside the Registry?
  4. Did the team finish building at least one functional system using this collection?
  5. Did the team create excellent documentation to allow future use of this collection?