Section 1 - Instruction

You've now seen two different ways to classify data: k-Nearest Neighbors, which uses proximity, and Decision Trees, which use a series of questions. Both are powerful, but they work in fundamentally different ways. Let's practice telling them apart.

Engagement Message

Ready to get started?

Section 2 - Practice

Type

Sort Into Boxes

Practice Question

Sort these core concepts into the algorithm they are most associated with.

Labels

  • First Box Label: k-Nearest Neighbors
  • Second Box Label: Decision Trees

First Box Items

  • Euclidean distance
  • Majority vote
  • k value

Second Box Items

  • Gini impurity
  • Splitting data
  • Root node
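The two measures sorted above can be made concrete with a short sketch. This is a minimal illustration, not part of the exercise: Euclidean distance is the straight-line distance k-NN uses to find neighbors, and Gini impurity is one criterion Decision Trees use to evaluate splits.

```python
import math

def euclidean_distance(a, b):
    # Straight-line distance: square root of the sum of squared differences.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def gini_impurity(labels):
    # 1 minus the sum of squared class proportions; 0 means the group is pure.
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

print(euclidean_distance((0, 0), (3, 4)))   # 5.0
print(gini_impurity(["A", "A", "B", "B"]))  # 0.5 (evenly mixed)
```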

Section 3 - Practice

Type

Multiple Choice

Practice Question

In a k-NN model with k=5, you are classifying a new data point. Its five nearest neighbors have the following labels: [Blue, Red, Red, Blue, Red]. What will the model predict for the new data point?

A. Blue
B. Red
C. It cannot be determined.
D. Both Blue and Red.

Suggested Answers

  • A
  • B - Correct
  • C
  • D
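The majority vote from the question above can be sketched in a few lines. This is an illustrative snippet using `collections.Counter`, not part of the exercise itself: with neighbors [Blue, Red, Red, Blue, Red], Red has three votes to Blue's two, so the model predicts Red.

```python
from collections import Counter

# Labels of the five nearest neighbors (k = 5) from the question above.
neighbors = ["Blue", "Red", "Red", "Blue", "Red"]

# Majority vote: the most common label among the k neighbors wins.
prediction = Counter(neighbors).most_common(1)[0][0]
print(prediction)  # Red
```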

Section 4 - Practice

Type

Fill In The Blanks

Markdown With Blanks

A decision tree is used to decide if you should play tennis. The first question, "Is the outlook sunny?", is called the [[root node]]. If the answer is yes, the next question, "Is the humidity high?", [[splits]] the data further.
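The tennis tree above maps directly onto nested conditionals. This is a minimal sketch: the two questions come from the exercise, but the leaf outcomes ("play" / "don't play") are illustrative assumptions, since the original text gives only the questions.

```python
def play_tennis(outlook, humidity):
    # Root node: the first question asked about every example.
    if outlook == "sunny":
        # Internal node, reached only on the "yes" branch of the root.
        if humidity == "high":
            return "don't play"  # assumed leaf outcome
        return "play"            # assumed leaf outcome
    # The source describes only the sunny branch; this default is an assumption.
    return "play"

print(play_tennis("sunny", "high"))  # don't play
```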
