Hi @SharonE,
Yep, you're absolutely right. The reason for this is that we ran a new results analysis process over the weekend.
A brief summary of the analysis process:
A LibCrowds project is a series of similar tasks. For each transcription task, we ask three people to transcribe the same fragment of text. When the final transcription is submitted, the analysis process is triggered: the transcriptions are normalised and then compared to see if they match. If the normalised transcriptions match, we mark the task as complete and store the final result. If they don't, we mark the task as requiring another contribution and send it back into the task queue. Once we receive an additional contribution, the process above runs again.
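If it helps to see it concretely, here's a rough Python sketch of that matching logic. The function names, the normalisation rules (lowercase, strip punctuation, collapse whitespace) and the agreement threshold are all illustrative, not our actual code:

```python
import re
import string
from collections import Counter

def normalise(text):
    """Illustrative normalisation: lowercase, strip punctuation,
    collapse runs of whitespace. The real rules may differ."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

def analyse(transcriptions, matches_required=3):
    """Triggered each time a contribution is submitted for a task.
    `matches_required` is a hypothetical threshold for agreement."""
    if not transcriptions:
        return {"status": "incomplete"}

    # Count how many contributions normalise to the same string.
    counts = Counter(normalise(t) for t in transcriptions)
    answer, n = counts.most_common(1)[0]

    if n >= matches_required:
        # Enough normalised transcriptions agree: store the final result.
        return {"status": "complete", "result": answer}

    # No consensus yet: send the task back into the queue.
    return {"status": "needs_another_contribution"}
```

So, for example, `analyse(["Fox, the.", "fox the", "FOX THE"])` would come back complete with the result `"fox the"`, because all three contributions normalise to the same string.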
This process has been running on all of our live tasks for a few weeks now, but we needed to run it retrospectively on the projects that were completed before it was in place. This should be a one-off thing.
Hopefully that makes some sense. I plan on writing up a bit more about this soon. In the meantime, any questions are welcome!