Welcome back! We have reached the final lesson of our course. So far, you have learned how to turn a project specification into a technical plan, how to design atomic tasks, and how to build systems with multiple components.
In this final section, we focus on efficiency. Now that you know how to break down work, you need to know how to organize those pieces to finish the project as quickly as possible. We will learn how to run tasks at the same time, how to pass work between tasks, and how to fix a plan when things go wrong.
Practice Environment Note:
These exercises use simplified Python classes to demonstrate parallel execution concepts. In production systems, you'd apply the same earliest-start scheduling and dependency analysis to real microservices, CI/CD pipelines, or distributed systems. The calculation methodology (critical paths, parallel opportunities) is universal.
When we talk about parallel execution, we mean working on multiple tasks at the same time. In a professional setting, this might involve different developers working on different parts of a feature. Even if you are working alone, understanding parallelism helps you identify which parts of your project are independent.
To run tasks in parallel, they must have no dependencies. A dependency is simply a requirement that one task must be finished before another can start.
Imagine we are building a Comments feature. Here is a simplified task list:
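The task list itself did not survive in this copy of the lesson; as a stand-in, here is a plausible sketch of such a list as Python data. The task IDs, names, and durations below are assumptions for illustration, not the lesson's original numbers.

```python
# Hypothetical task list for the Comments feature (IDs and durations assumed).
# Each task records which tasks must finish before it can start.
tasks = {
    "T001": {"name": "Comment database model", "depends_on": [], "minutes": 30},
    "T002": {"name": "Repository (talks to the database)", "depends_on": ["T001"], "minutes": 45},
    "T003": {"name": "API schema (shape of the data)", "depends_on": ["T001"], "minutes": 30},
    "T004": {"name": "Comments API endpoint", "depends_on": ["T002", "T003"], "minutes": 45},
}

# The Repository and the Schema both depend on the model, but not on each
# other, so they can run in parallel once the model task finishes.
parallel = [tid for tid, info in tasks.items() if info["depends_on"] == ["T001"]]
print(parallel)  # ['T002', 'T003']
```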
In this example, both the Repository and the Schema need the database model to exist first. However, the Repository (which communicates with the database) and the Schema (which defines how data looks in the API) do not need each other.
The key to calculating project timelines correctly is understanding when each task can actually start:
Rule 1: Independent Tasks Start Immediately
Tasks with no dependencies can begin at time 0.

Rule 2: Dependent Tasks Start When ALL Dependencies Finish
A task starts as soon as its slowest dependency completes.

Rule 3: Tasks Don't Wait for "Waves"
Common misconception: "All Phase 1 tasks must complete before any Phase 2 task starts." Reality: As soon as a task's specific dependencies finish, it can start immediately.
Let's see why this matters:
The wave model forces T3 to wait unnecessarily for T2 to finish, even though T3 only depends on T1.
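The numbers in this comparison were lost from this copy of the lesson, so the durations below are illustrative assumptions. The calculation itself follows directly from Rule 3:

```python
# Illustrative durations in minutes; T3 depends only on T1.
durations = {"T1": 30, "T2": 60, "T3": 20}
deps = {"T1": [], "T2": [], "T3": ["T1"]}

# Wave model: every "Phase 2" task waits for ALL "Phase 1" tasks.
wave_start_t3 = max(durations["T1"], durations["T2"])   # T3 waits until minute 60

# Earliest-start model: T3 starts when its own dependency (T1) finishes.
earliest_start_t3 = durations["T1"]                     # T3 starts at minute 30

print(wave_start_t3, earliest_start_t3)  # 60 30
```

With these numbers, earliest-start scheduling lets T3 begin 30 minutes sooner than the wave model allows.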
Here's how to calculate the minimum time to complete all tasks:
Step 1: For each task, calculate its earliest start time
- If no dependencies: start time = 0
- If has dependencies: start time = MAX(finish times of all dependencies)
Step 2: Calculate earliest finish time
- finish time = start time + task duration
Step 3: Project completion time
- MAX(all finish times)
Example:
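As a worked example, the three steps above can be implemented directly as a small scheduler. The task data here is illustrative, not from the lesson:

```python
def schedule(tasks):
    """Compute earliest finish times for each task.

    tasks: dict mapping task id -> (duration, [dependency ids]).
    Assumes the dependencies form a DAG (no cycles).
    """
    finish = {}

    def finish_time(tid):
        if tid not in finish:
            duration, deps = tasks[tid]
            # Steps 1 & 2: start at 0, or when the slowest dependency finishes.
            start = max((finish_time(d) for d in deps), default=0)
            finish[tid] = start + duration
        return finish[tid]

    for tid in tasks:
        finish_time(tid)
    # Step 3: the project completes when the last task finishes.
    return finish, max(finish.values())

# Illustrative data: T1 and T2 are independent; T3 waits only on T1.
tasks = {"T1": (30, []), "T2": (60, []), "T3": (20, ["T1"])}
finish, total = schedule(tasks)
print(finish, total)  # {'T1': 30, 'T2': 60, 'T3': 50} 60
```

Here T3 starts at minute 30 (when T1 finishes) and the whole project takes 60 minutes, bounded by T2.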
Let's look at a more complex example: building a Real-Time Notification System. This system requires three independent components that can be built in parallel before integrating them together:
How This Works:
- Track 1 (WebSocket): Tasks T001-T002 build the WebSocket connection layer independently.
- Track 2 (Redis): Tasks T003-T004 set up the Redis messaging infrastructure independently.
- Track 3 (Event Publishing): Tasks T005-T006 create the event structure independently.
- Integration: Once all three tracks complete (at 90 min), tasks T007-T008 can begin.
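The 90-minute integration point can be checked with the same earliest-start rule. The individual task durations below are assumptions chosen so that each track totals 90 minutes, matching the text:

```python
# Assumed durations (minutes); each track sums to 90 as stated in the lesson.
durations = {"T001": 45, "T002": 45,   # Track 1: WebSocket layer
             "T003": 40, "T004": 50,   # Track 2: Redis messaging
             "T005": 30, "T006": 60,   # Track 3: Event publishing
             "T007": 30, "T008": 30}   # Integration
deps = {"T001": [], "T002": ["T001"],
        "T003": [], "T004": ["T003"],
        "T005": [], "T006": ["T005"],
        "T007": ["T002", "T004", "T006"],  # integration waits on all three tracks
        "T008": ["T007"]}

finish = {}
for tid in durations:  # ids happen to be listed in dependency order
    start = max((finish[d] for d in deps[tid]), default=0)
    finish[tid] = start + durations[tid]

print(finish["T006"], finish["T007"])  # each track ends at 90; integration starts then
```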
When tasks are split up, the biggest risk is that the parts will not fit together at the end. We prevent this by using explicit handoff instructions.
A handoff happens when one task relies on code created in a previous task. You must tell Claude exactly where that code is and how to use it. Let's look at an example where we need to build an AttachmentService that uses an S3Client created in a previous step.
First, let's look at the "existing" code created in a prior task (T003):
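The code block from T003 is missing from this copy of the lesson; a minimal sketch of what such a client might look like follows. The class name matches the text, but the `upload_file` method name and the in-memory storage are assumptions for illustration:

```python
# src/storage/s3_client.py — sketch of the client assumed to exist after T003.

class S3Client:
    """Thin wrapper around object storage; stores bytes under a key."""

    def __init__(self, bucket: str):
        self.bucket = bucket
        self._objects = {}  # in-memory stand-in for real S3 calls

    def upload_file(self, key: str, data: bytes) -> str:
        """Store the data and return the object's storage path."""
        self._objects[key] = data
        return f"s3://{self.bucket}/{key}"
```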
This is our starting point. When we write the prompt or the acceptance criteria for the next task (T005), we should not simply say, "Build a service to save files." Instead, we provide a clear handoff:
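The handoff text itself is not shown in this copy of the lesson. Assuming the client exposes an upload method, it might read like this (the method name `upload_file` is an assumption, not from the lesson):

```
Task T005 — AttachmentService handoff:
- REUSE the existing client in src/storage/s3_client.py. Do NOT create a new storage client.
- Call its upload method (assumed here: upload_file(key, data)) to persist each file.
- Return the storage path the client returns; do not build paths manually.
```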
Now, let's see how the code is built following that handoff:
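The resulting code is also missing here; below is a sketch of what T005 might produce, with a minimal stand-in for the prior task's client so the example runs on its own. All names (`AttachmentService`, `upload_file`, the `attachments/` prefix) are assumptions:

```python
class S3Client:
    """Stand-in for the client defined in src/storage/s3_client.py (task T003)."""

    def __init__(self, bucket: str):
        self.bucket = bucket
        self._objects = {}

    def upload_file(self, key: str, data: bytes) -> str:
        self._objects[key] = data
        return f"s3://{self.bucket}/{key}"


class AttachmentService:
    """Saves attachments by delegating storage to the existing S3Client."""

    def __init__(self, client: S3Client):
        self.client = client  # reuse, never re-create, the storage client

    def save(self, filename: str, data: bytes) -> str:
        # Delegate to the handed-off client instead of reimplementing storage.
        return self.client.upload_file(f"attachments/{filename}", data)
```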
By being specific about the file path (src/storage/s3_client.py) and the exact method name, we ensure the AI does not attempt to reinvent the storage logic or create a different, incompatible version of the client.
Sometimes, a plan looks good on paper but fails during execution. These are called anti-patterns. Recognizing them early saves hours of frustration.
1. The Too Coarse Task
A task is too coarse if it attempts to do everything at once.
- Bad: T001 "Build authentication system." (This includes models, logic, security, and APIs.)
- Fix: Split it: T001: User Model, T002: Token Logic, T003: Login API.
2. The Too Granular Task
This is the opposite problem: splitting things so small that you spend more time managing tasks than coding.
- Bad: T001 "Add import statements," T002 "Define class name," T003 "Add one field."
- Fix: Merge related logic into one functional unit, such as "T001: Create User Model with all fields and validation."
3. The Backward Dependency
This happens when you try to build the "roof" of a house before the "foundation."
- Bad: T001: API endpoints -> T002: Database Repository.
- Fix: Reverse the order. You cannot effectively test an API endpoint if the data layer it depends on does not exist yet.
Even with a perfect plan, things can go wrong. If a task fails or takes too long, you need a recovery strategy.
Scenario 1: The task is taking too long. If a task takes more than 90 minutes, the scope is likely too large. Do not force the AI to keep trying.
- The Move: Stop the execution. Commit the code that is already working. Create a new "Part B" task to finish the rest.
Scenario 2: Validation Fails (Tests are failing). If the tests do not pass, check if the task is too complex.
- The Move: Identify the "split point." If a task attempts to validate data and save it to a database, split those into two tasks: T00Xa (Validator) and T00Xb (Repository).
Scenario 3: Import Errors. If Claude encounters errors stating that a file is missing, your dependency graph might be incorrect.
- The Move: Pause and check if the prerequisite task was actually completed. If you missed a step, stop and execute the missing prerequisite before retrying the current task.
In this lesson, we covered how to optimize your project timeline using parallel execution with proper earliest-start scheduling, and how to ensure different tasks connect perfectly through clear handoffs. We also looked at how to spot and fix bad task plans.
What transfers to production:
- Earliest-start scheduling works identically for microservices, CI/CD pipelines, and distributed systems
- Dependency analysis applies whether coordinating developers, AI agents, or automated builds
- Critical path calculation determines minimum project duration regardless of scale
- Handoff patterns prevent integration failures in any team structure
You are now ready for the final practices:
- In Task 1, you will implement an earliest-start scheduler and calculate critical paths correctly.
- In Task 2, you will plan a Bulk Status Update feature using parallel tracks to save time.
- In Task 3, you will design a complex Real-Time Notification system with three independent workstreams.
On CodeSignal, the necessary libraries for these tasks are already installed for you, so you can focus entirely on the logic and the decomposition.
You have completed the lessons for this course; all that remains is to apply what you've learned in the final practices. You now have the skills to take a complex professional requirement and break it down into a clean, executable, and highly efficient plan that an AI can help you build. Happy coding!
