In the previous lesson, you learned how to transform individual data points using calculations to create derived metrics like CPU headroom. Every measurement in your database became a point on your graph — if you had measurements every 10 seconds, Grafana plotted all of them. This works beautifully when you're viewing short time ranges like the last hour or the last six hours.
But now consider what happens when you want to view the last 30 days of CPU usage or the last 3 months. If your database records measurements every 10 seconds, that's 259,200 data points for 30 days. Querying and plotting every single measurement creates two serious problems: your queries become slow because PostgreSQL has to retrieve and send hundreds of thousands of rows, and your graphs become cluttered because Grafana tries to draw every individual point even though your panel is only about 1,000 pixels wide.
This is where time bucketing becomes essential. Instead of plotting every individual measurement, you group multiple measurements into time intervals (called buckets) and show the average value for each bucket. When viewing 30 days of data, you might bucket measurements into 1-hour intervals, reducing 259,200 data points down to 720 points — one per hour. Your graph remains smooth and readable, your queries run fast, and you still see the overall trends clearly.
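The arithmetic behind those numbers is easy to verify. A quick sketch in plain Python (nothing Grafana-specific here):

```python
SECONDS_PER_DAY = 24 * 60 * 60

# Raw measurements: one sample every 10 seconds for 30 days.
raw_points = 30 * SECONDS_PER_DAY // 10

# Bucketed: one averaged value per 1-hour bucket for 30 days.
bucketed_points = 30 * 24

print(raw_points)       # 259200
print(bucketed_points)  # 720

# Each hourly bucket absorbs 360 raw samples.
print(raw_points // bucketed_points)  # 360
```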
The brilliant part about Grafana is that it calculates the optimal bucket size automatically based on your selected time range. You don't have to manually adjust your queries when switching from viewing 1 hour to viewing 1 month — Grafana figures out whether you need 10-second buckets, 5-minute buckets, or 1-hour buckets. This lesson will teach you how to leverage Grafana's automatic time bucketing so your panels stay fast and responsive no matter what time range you're viewing.
Every Grafana dashboard and panel has a Time Picker control located in the top-right corner of the interface. You've been using it already — when you select "Last 6 hours" or "Last 24 hours," you're using the Time Picker. This control determines what time range your queries will filter on, and the $__timeFilter(ts) macro in your WHERE clause automatically respects that selection.
What you might not have noticed yet is that the Time Picker has a much broader purpose beyond just filtering data. It fundamentally changes how Grafana expects your query to behave at different scales. When you select "Last 5 minutes," Grafana assumes you want to see fine-grained detail — every measurement matters because you're investigating something happening right now. But when you select "Last 90 days," Grafana assumes you want to see trends and patterns, not individual measurements from two months ago at 3:47 PM.
This is the key insight: longer time ranges need fewer, larger time buckets. If you're troubleshooting a current issue over the last 15 minutes, you want second-by-second detail. If you're presenting a quarterly capacity planning report, you want daily or weekly averages. The Time Picker signals this intent, and Grafana adjusts accordingly through a special variable called $__interval that automatically calculates the optimal bucket size for your current time range and panel width.
The $__interval variable is one of Grafana's most powerful features for building production-quality dashboards. It's a variable that Grafana calculates automatically every time your panel loads based on two factors: the time range you've selected in the Time Picker and the width of your panel in pixels. You never have to set this variable yourself — Grafana does all the math and makes it available for your queries to use.
Here's how it works in practice. When you select "Last 5 minutes" in the Time Picker and your panel is about 1,000 pixels wide, Grafana calculates that $__interval should be something like "10s" (10 seconds). If you then change the Time Picker to "Last 24 hours," Grafana recalculates $__interval to something like "5m" (5 minutes), because grouping 24 hours into 5-minute buckets gives you 288 buckets — a reasonable number for smooth visualization.
The magic of $__interval is that it scales intelligently. Here are some real examples:
- Last 5 minutes: $__interval might be "10s" — you get 30 buckets of 10 seconds each
- Last 1 hour: $__interval might be "1m" — you get 60 buckets of 1 minute each
- Last 24 hours: $__interval might be "5m" — you get 288 buckets of 5 minutes each
- Last 7 days: $__interval might be "30m" — you get 336 buckets of 30 minutes each
- Last 30 days: $__interval might be "2h" — you get 360 buckets of 2 hours each
Notice how the bucket size grows as the time range increases, but the number of resulting buckets stays in a reasonable range for visualization. Grafana is optimizing for both query performance and graph readability automatically.
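Grafana's exact rounding rules are internal, but the core idea can be sketched: divide the time range by a target point count (roughly the panel width in pixels), enforce a minimum interval, and round up to the next "nice" step. The step list and the 10-second minimum below are illustrative assumptions, not Grafana's actual implementation:

```python
# Candidate "nice" bucket sizes in seconds: 1s, 5s, 10s, 30s, 1m,
# 5m, 10m, 30m, 1h, 2h, 6h, 1d. Illustrative, not Grafana's list.
NICE_STEPS = [1, 5, 10, 30, 60, 300, 600, 1800, 3600, 7200, 21600, 86400]

def auto_interval(range_seconds: int,
                  max_points: int = 1000,
                  min_interval: int = 10) -> int:
    """Return the smallest 'nice' step (in seconds) that keeps the
    bucket count at or below max_points and above min_interval."""
    ideal = max(range_seconds / max_points, min_interval)
    for step in NICE_STEPS:
        if step >= ideal:
            return step
    return NICE_STEPS[-1]

print(auto_interval(24 * 3600))   # last 24 hours -> 300 (5m buckets)
print(auto_interval(7 * 86400))   # last 7 days -> 1800 (30m buckets)
print(auto_interval(5 * 60))      # last 5 minutes -> 10 (10s buckets)
```

Under these assumptions the sketch reproduces several of the examples above; Grafana's real values can differ by version and by the panel's configured minimum interval.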
Now that you understand what $__interval represents, let's talk about how to actually use it in your queries. This is where Grafana's $__timeGroupAlias() function comes into play. This special function takes your timestamp column and the $__interval variable and produces time buckets that Grafana can plot on your time-series graphs.
Time bucketing means combining multiple measurements that fall within the same time interval into a single data point. If you have CPU measurements recorded every 10 seconds and you're bucketing into 5-minute intervals, each bucket will contain 30 measurements. But you can't just combine those 30 measurements without deciding how to represent them — you need to aggregate them using a function like AVG(), MAX(), or SUM().
For most infrastructure metrics, averaging makes the most sense. If your CPU usage measurements within a 5-minute bucket are 72%, 75%, 73%, 77%, 74%, etc., the average gives you a representative value for that entire 5-minute period. This is why you'll see AVG(usage) in bucketed queries instead of just usage like you've used before.
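Conceptually, that is all AVG() over a bucket does: round each timestamp down to its bucket boundary, collect the samples that share a boundary, and average them. A small sketch with made-up sample values (the timestamps and usage numbers are hypothetical, not from the lesson's database):

```python
from collections import defaultdict
from statistics import mean

BUCKET = 300  # 5-minute buckets, in seconds

# (epoch_timestamp, cpu_usage_percent) samples taken every 10 s.
samples = [(t, 70 + (t // 10) % 8) for t in range(0, 600, 10)]

buckets = defaultdict(list)
for ts, usage in samples:
    # Round each timestamp down to its bucket's start boundary.
    buckets[ts - ts % BUCKET].append(usage)

# One averaged point per bucket -- the SQL AVG(usage) equivalent.
averaged = {start: mean(vals) for start, vals in sorted(buckets.items())}
for start, avg in averaged.items():
    print(start, round(avg, 1))
```

Sixty raw samples collapse into two plotted points, one per 5-minute bucket, which is exactly the reduction the bucketed query performs inside PostgreSQL.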
The $__timeGroupAlias() function handles the complex SQL needed to create these time buckets and format the results in a way Grafana expects. When you write $__timeGroupAlias(ts, $__interval) in your query, Grafana expands this into SQL that rounds your timestamps to the nearest interval boundary.
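For the PostgreSQL data source, that expansion looks roughly like the following. The expanded form is a sketch of what Grafana generates and can vary by Grafana version:

```sql
-- What you write in the panel's query editor:
SELECT $__timeGroupAlias(ts, $__interval)

-- Roughly what Grafana sends to PostgreSQL when $__interval is 5m:
-- timestamps are floored to the nearest 300-second boundary and
-- aliased to "time" so Grafana recognizes the column.
SELECT floor(extract(epoch from ts) / 300) * 300 AS "time"
```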
Here's the important part about the GROUP BY clause. When you create time buckets with $__timeGroupAlias() and you're also showing multiple metrics (like different hosts), you need to group by both the time bucket and the metric. The pattern GROUP BY 1, 2 means "group by the first column (the time bucket) and the second column (the metric)." This ensures that within each time bucket, you still maintain separate lines for each host rather than accidentally averaging across all hosts.
Let's walk through the time-bucketing query line by line so you understand how all the pieces work together:
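Here is the complete query the walkthrough refers to. The column names (ts, host, usage) and the public.metrics_cpu table follow the conventions used earlier in this course:

```sql
SELECT $__timeGroupAlias(ts, $__interval),
       host AS metric,
       AVG(usage) AS value
FROM public.metrics_cpu
WHERE $__timeFilter(ts)
GROUP BY 1, 2
```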
Line 1: $__timeGroupAlias(ts, $__interval) calls Grafana's time grouping function with your timestamp column and the auto-calculated interval variable. This creates time buckets and formats them for visualization.
Line 2: host AS metric aliases the host column so Grafana knows to create separate lines on your graph. If you have host-a, host-b, and host-c, your panel will display three distinct lines.
Line 3: AVG(usage) AS value is where the actual bucketing happens. Instead of selecting usage directly, you're calculating the average of all usage values within each bucket. If your 5-minute bucket contains 30 measurements ranging from 70% to 78%, AVG(usage) will compute something like 74% as the representative value.
Lines 4-5: The FROM and WHERE clauses remain the same — you're pulling data from the public.metrics_cpu table and filtering with $__timeFilter(ts) to respect the Time Picker selection.
Line 6: GROUP BY 1, 2 groups by the time bucket (position 1) and the host metric (position 2). This tells PostgreSQL: "Group all rows that have the same time bucket AND the same host together, then calculate the average usage for each group."
You've just learned the final piece of building responsive Grafana dashboards: automatic time bucketing with $__interval and $__timeGroupAlias(). This technique ensures your queries perform well and your visualizations remain clear regardless of what time range users select.
Let's recap your complete journey through this course. You started by connecting Grafana to a PostgreSQL database and exploring the available metrics tables. You built your first time-series panel showing CPU usage over time, discovering how Grafana's special macros simplify query writing. You created Stat Panels to display single summary values, learned to compare multiple hosts on a single panel, and calculated derived metrics like CPU headroom. And now you've mastered time bucketing with automatic interval adjustment, the technique that makes dashboards perform well at any scale.
You now understand the fundamentals of Grafana visualization with SQL data sources. You know how to write queries that respect Grafana's time range picker, how to structure data for different panel types, how to create clear metric labels, and how to make your panels responsive to different viewing contexts. These skills form the foundation of effective observability dashboards that help teams monitor infrastructure, troubleshoot issues, and make data-driven decisions.
In the upcoming practice exercises, you'll apply everything you've learned by building complete panels with time bucketing. You'll experiment with different time ranges to watch $__interval adjust automatically, create bucketed visualizations for memory and network metrics, and combine time bucketing with filtering to build focused monitoring panels.
Congratulations on completing Grafana Fundamentals with PostgreSQL Metrics! You've gained practical skills that translate directly to real-world monitoring scenarios. Whether you're visualizing application metrics, infrastructure data, or business KPIs, the patterns you've learned here apply universally. Keep building, keep experimenting, and keep making your data tell compelling stories through visualization.
