Charts
My First Splunk Chart
Charts in Splunk are powerful visualization tools that transform raw data into meaningful insights, supporting data analysis and decision-making in information technology. Splunk charts allow IT professionals to monitor, analyze, and visualize data from diverse sources in real time. These visualizations include line charts, bar charts, pie charts, scatter plots, and more, each serving a specific analytical purpose.
In IT, charts in Splunk are used extensively for performance monitoring, security analysis, and operational intelligence. For instance, line charts can display trends over time, helping track system performance metrics like CPU usage, network traffic, or error rates. Bar charts can compare categorical data, such as the number of incidents per server or user activity across different departments. Pie charts provide a snapshot of data distribution, useful for visualizing resource allocation or event proportions.
Scatter plots and bubble charts enable the correlation of multiple variables, aiding in identifying patterns or anomalies that might indicate security threats or operational issues. Additionally, Splunk's ability to create custom dashboards with multiple charts provides a holistic view of IT infrastructure, facilitating proactive management and quick response to potential problems. Overall, charts in Splunk empower IT teams to make data-driven decisions, enhance system reliability, and improve security posture.
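As a quick illustration of the line-chart case above, a search like the following sketch would feed a line chart of average CPU usage per host over time. The index and field names here (perf_metrics, cpu_load_percent) are placeholders for illustration, not real sources from my environment:

index=perf_metrics sourcetype=cpu_stats
| timechart span=5m avg(cpu_load_percent) as avg_cpu by host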

Splunk Chart Overview
One of the key elements of building charts is knowing what you want to display. Having a solid plan and mapping out your requirements before you start designing custom SPL queries will save you a lot of rework. Charts are the cornerstone of dashboards, and a good understanding of designing SPL queries for charts makes it much easier to group charts with related categories of datasets into dashboards.

Helpful Links
stats
The stats command in Splunk is a versatile and powerful tool used for aggregating and summarizing data. It allows users to compute various statistics on the events in their datasets, enabling them to extract meaningful insights.
Common Functions
count: Counts the number of events.
sum: Sums the values of a field.
avg: Calculates the average value of a field.
min: Finds the minimum value of a field.
max: Finds the maximum value of a field.
median: Calculates the median value of a field.
stdev: Calculates the standard deviation of a field.
distinct_count or dc: Counts the distinct values of a field.
Base Syntax:

stats <function>(<field>) [as <new_field_name>] [BY <field>]
Examples:

index=web_logs | stats count

index=web_logs | stats sum(bytes) as total_bytes

index=web_logs | stats avg(response_time) as avg_response_time

index=web_logs | stats count by status
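Multiple functions can also be combined in a single stats call. This sketch, still against the hypothetical web_logs index, summarizes response times per HTTP status in one pass:

index=web_logs | stats count avg(response_time) as avg_response_time max(response_time) as max_response_time by status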
Dashboard Draft
I drew this up in MS Paint to get an idea of what I wanted to end up with.


Network by IOS Image
index="network_hardware" DeviceName=$mydevicename$ IPAddress=$myipaddress$ SerialNumber=$myserialnumber$
| dedup DeviceName
| table DeviceName, IPAddress, MachineType, Model, IOSImage, IOSVersion, LastBoot, SerialNumber, SNMPVersion
| stats count by IOSImage
The $...$ placeholders in the query above are a specific type of SPL element called a token. I have search boxes on my dashboards, and when users enter information it populates the tokens, which are then substituted into the search query.
Example: If someone types in the IP Address textbox on my dashboard, the string they typed is stored in the $myipaddress$ token. The token is defined when adding the textbox element from the top menu of the dashboard editor.
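To make that concrete, here is a rough sketch of what the search engine would actually receive once the tokens resolve. The address 10.0.0.1 is a made-up value for illustration, and I'm assuming the other two tokens are given a default of * in the editor so the search still runs when those boxes are left empty. The rest of the pipeline stays the same:

index="network_hardware" DeviceName=* IPAddress=10.0.0.1 SerialNumber=*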
Note: There are classic dashboards and Dashboard Studio dashboards. I use the Studio version.

Top & Rare
In Splunk Search Processing Language (SPL), the top and rare commands are essential for data analysis and visualization. The top command identifies the most frequently occurring values in a specified field, making it useful for quickly finding dominant trends or common patterns within large datasets.
Conversely, the rare command highlights the least frequently occurring values, which can be crucial for spotting anomalies or rare events that might otherwise go unnoticed. Both commands are invaluable for efficiently summarizing and interpreting complex data, enabling users to gain actionable insights with minimal effort.
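Both commands take the same basic shape. Reusing the hypothetical web_logs index from the stats examples above, these two sketches would return the ten most common HTTP status codes and the five least common user agents, ready to drop into a bar or pie chart:

index=web_logs | top limit=10 status

index=web_logs | rare limit=5 user_agent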