The seventh challenge edition is being organized as part of ISBI 2024, held in Athens, Greece, in May 2024. In this edition, the primary focus is on methods that exhibit a high level of generalizability and work across 13 existing datasets, rather than on methods optimized for one or a few datasets only. To complement the existing segmentation-only and segmentation-and-tracking benchmarks, a new linking-only benchmark is introduced, allowing objective evaluation of object-linking methods over standardized, yet imperfect, segmentation inputs. Submissions of four different types will be collected, evaluated, and announced simultaneously at the corresponding ISBI 2024 challenge workshop, according to the following schedule:
December 22nd, 2023
The registration period for the seventh challenge edition opens
January 8th, 2024
Detailed instructions on linking-only submissions and linking-only evaluation routines are released
April 2nd, 2024 (originally March 18th, 2024)
The registration period for the seventh challenge edition closes
April 6th, 2024 (originally March 25th, 2024)
Deadline for submitting results to individual tracks, including command-line executables of the algorithms used, their detailed descriptions, and the parameter configurations followed
April 7th - April 14th, 2024 (originally March 26th - April 1st, 2024)
The received submissions are checked for completeness and consistency
April 20th, 2024 (originally April 6th, 2024)
Deadline for revising incomplete or inconsistent submissions
April 21st - May 26th, 2024 (originally April 7th - May 26th, 2024)
Validation and evaluation of the submissions received by rerunning the algorithms on our evaluation servers
May 11th, 2024 (originally April 27th, 2024)
Deadline for submitting reusable versions of competing algorithms
May 27th, 2024
ISBI 2024 Challenge Workshop
June-August 2024
Preparation of a manuscript with a detailed analysis of the collected results
The registered participants compete over the set of 13 real datasets (eight 2D+t and five 3D+t ones), with complete gold tracking truth, and gold and silver segmentation truths available for the training datasets. An expected submission consists of a set of six segmentation-and-tracking results per test dataset, created using the same approach with parameters/models optimized/trained using each of the following six training data configurations: gold segmentation truth per dataset, silver segmentation truth per dataset, a mixture of gold and silver segmentation truths per dataset, gold segmentation truths across all the 13 datasets, silver segmentation truths across all the 13 datasets, and a mixture of gold and silver segmentation truths across all the 13 datasets. No training data configurations other than these six may be exploited. Please note that all generalizable submissions to the Cell Tracking Benchmark (Track 01) are automatically treated as generalizable submissions to the Cell Segmentation Benchmark (Track 02) too.
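For concreteness, the following sketch enumerates the six allowed training data configurations as combinations of segmentation-truth type (gold, silver, or mixed) and training scope (per dataset, or across all 13 datasets). The dataset names, configuration labels, and helper function are illustrative assumptions made for this note only; they are not part of the official challenge tooling.

```python
# Illustrative sketch only (not official challenge tooling): enumerate the six
# training data configurations allowed for a generalizable submission.
# "GT" = gold segmentation truth, "ST" = silver segmentation truth,
# "GT+ST" = a mixture of both; dataset names are placeholders.

DATASETS = [f"Dataset-{i:02d}" for i in range(1, 14)]   # the 13 real datasets (placeholder names)
TRUTH_TYPES = ["GT", "ST", "GT+ST"]                     # gold, silver, or mixed segmentation truths
SCOPES = ["per-dataset", "across-all-13-datasets"]      # training scope

def training_configurations():
    """Yield the six allowed (truth type, scope) combinations."""
    for scope in SCOPES:
        for truth in TRUTH_TYPES:
            yield truth, scope

if __name__ == "__main__":
    configs = list(training_configurations())
    for truth, scope in configs:
        print(f"train/optimize on {truth} segmentation truths, {scope}")
    # One result per test dataset and configuration is expected:
    print(len(DATASETS) * len(configs), "results per complete submission")  # 13 * 6 = 78
```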
The performance of a particular algorithm for a given test dataset and training data configuration is primarily evaluated using the DET, SEG, TRA, OPCSB, and OPCTB measures. Furthermore, the biological performance of the algorithm, evaluated using the CT, TF, BC(i), and CCA measures, is provided as complementary information. The overall, measure-specific performance of the algorithm, which is used for its ranking, is then obtained by averaging its measure-specific performance scores over all the included test datasets and training data configurations.
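As a minimal illustration of this ranking step, the snippet below averages the per-dataset, per-configuration scores of a single measure into one overall score. The nested-dictionary layout and the example numbers are assumptions made for demonstration; the official evaluation software defines the authoritative procedure.

```python
# Illustrative sketch (assumed data layout, not the official evaluation code):
# the overall score of one measure is the mean of its per-(test dataset,
# training configuration) scores over everything included in the submission.

def overall_score(scores: dict[str, dict[str, float]]) -> float:
    """scores[dataset][configuration] -> score of a single measure (e.g. TRA) in [0, 1]."""
    values = [s for per_config in scores.values() for s in per_config.values()]
    return sum(values) / len(values)

# Hypothetical example with two datasets and two of the six configurations:
tra_scores = {
    "Dataset-01": {"GT/per-dataset": 0.91, "GT/across-all": 0.88},
    "Dataset-02": {"GT/per-dataset": 0.85, "GT/across-all": 0.83},
}
print(round(overall_score(tra_scores), 4))  # 0.8675
```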
Apart from submitting a set of 78 segmentation-and-tracking results for the 13 included test datasets, the participants must (i) provide command-line versions of the algorithms used to produce the submitted results, thus allowing the challenge organizers to validate all submitted results by rerunning the algorithms on the test datasets on their own, (ii) disclose descriptions of the algorithms used, including the details on the parameter configurations chosen and the training protocols followed, and (iii) prepare their algorithms in a reusable form. The submission instructions are the same as for regular submissions to the Cell Tracking Benchmark, with the exception of the output subfolder and entry file names, which must reflect the training data configuration used. For more details, please check the last section of this document.
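Before uploading, a small sanity check along the following lines can confirm that all 78 result sets are in place. The "<dataset>_RES-<configuration>" folder pattern is a placeholder invented for this sketch; the actual output subfolder and entry file names must follow the configuration-specific naming described in the last section of this document.

```python
# Pre-submission sanity-check sketch: one result folder is expected per
# (test dataset, training configuration) pair, i.e. 13 x 6 = 78 folders.
# The "<dataset>_RES-<configuration>" pattern is a PLACEHOLDER; use the
# configuration-specific names prescribed in the submission instructions.
from pathlib import Path

DATASETS = [f"Dataset-{i:02d}" for i in range(1, 14)]                  # placeholder dataset names
CONFIGURATIONS = ["GT", "ST", "GT+ST", "allGT", "allST", "allGT+ST"]   # placeholder configuration labels

def missing_results(root: Path) -> list[str]:
    """Return the expected result folders that are absent under `root`."""
    expected = [f"{d}_RES-{c}" for d in DATASETS for c in CONFIGURATIONS]
    return [name for name in expected if not (root / name).is_dir()]

if __name__ == "__main__":
    missing = missing_results(Path("submission"))
    print(f"{78 - len(missing)} of 78 expected result folders found")
    for name in missing:
        print("missing:", name)
```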
The registered participants compete over the set of 13 real datasets (eight 2D+t and five 3D+t ones), with complete gold tracking truth, and gold and silver segmentation truths available for the training datasets. An expected submission consists of a set of six segmentation-only results per test dataset, created using the same approach with parameters/models optimized/trained using each of the following six training data configurations: gold segmentation truth per dataset, silver segmentation truth per dataset, a mixture of gold and silver segmentation truths per dataset, gold segmentation truths across all the 13 datasets, silver segmentation truths across all the 13 datasets, and a mixture of gold and silver segmentation truths across all the 13 datasets. No training data configurations other than these six may be exploited.
The performance of a particular algorithm for a given test dataset and training data configuration is evaluated using the DET, SEG, and OPCSB measures. The overall, measure-specific performance of the algorithm, which is used for its ranking, is then obtained by averaging its measure-specific performance scores over all the included test datasets and training data configurations.
Apart from submitting a set of 78 segmentation-only results for the 13 included test datasets, the participants must (i) provide command-line versions of the algorithms used to produce the submitted results, thus allowing the challenge organizers to validate all submitted results by rerunning the algorithms on the test datasets on their own, (ii) disclose descriptions of the algorithms used, including the details on the parameter configurations chosen and the training protocols followed, and (iii) prepare their algorithms in a reusable form. The submission instructions are the same as for regular submissions to the Cell Segmentation Benchmark, with the exception of the output subfolder and entry file names, which must reflect the training data configuration used. For more details, please check the last section of this document.
The registered participants compete over the set of 13 real datasets (eight 2D+t and five 3D+t ones), with complete gold tracking truth and imperfect segmentation masks available for the training datasets. An expected submission consists of a set of 13 segmentation-and-tracking results for the 13 included training datasets, created using the same approach with no limitations on the training data configurations used. For more detailed information, please check this document.
The performance of a particular algorithm for a given test dataset is primarily evaluated using the LNK, BIO, and OPCLB measures. Furthermore, the detailed biological performance of the algorithm, evaluated using the CT, TF, BC(i), and CCA measures, is provided as complementary information. The overall, measure-specific performance of the algorithm, which is used for its ranking, is then obtained by averaging its measure-specific performance scores over all the included test datasets.
Apart from submitting a set of 13 segmentation-and-tracking results for the 13 included training datasets, the participants must (i) provide command-line versions of the algorithms used to produce the submitted results, thus allowing the challenge organizers to validate all submitted results by rerunning the algorithms on the training datasets on their own and to generate results for evaluation by running the algorithms on the test datasets, (ii) disclose descriptions of the algorithms used, including the details on the parameter configurations chosen, the training data used, and the training protocols followed, and (iii) prepare their algorithms in a reusable form. The submission instructions are the same as for regular submissions to the Cell Linking Benchmark. For more detailed submission instructions, please check this document.
The registered participants compete over 2D+t and 3D+t datasets of their choice, with complete gold tracking truth and imperfect/perfect segmentation masks available for the real/computer-generated training datasets, respectively. An expected submission consists of one segmentation-and-tracking result per training dataset, with the possibility of using different algorithms for different datasets and no limitations on the training data configurations used. For more detailed information, please check this document.
The performance of a particular algorithm for a given test dataset is primarily evaluated using the LNK, BIO, and OPCLB measures. Furthermore, the detailed biological performance of the algorithm, evaluated using the CT, TF, BC(i), and CCA measures, is provided as complementary information.
Apart from submitting segmentation-and-tracking results for the chosen training datasets, the participants must (i) provide command-line versions of the algorithms used to produce the submitted results, thus allowing the challenge organizers to validate all submitted results by rerunning the algorithms on the training datasets on their own and to generate results for evaluation by running the algorithms on the test datasets, (ii) disclose descriptions of the algorithms used, including the details on the parameter configurations chosen and the training protocols followed, and (iii) prepare their algorithms in a reusable form. For more detailed submission instructions, please check this document.