People analytics metrics are quantitative measures that describe workforce health, efficiency and impact across recruitment, retention, engagement, performance, diversity & inclusion, and wellness.
This guide centres on people analytics metrics: what to measure, how to calculate it, and how to make metrics operational so HR teams deliver board-ready insights that connect to business outcomes such as revenue per employee and cost-of-vacancy.
Why these metrics matter: they translate HR activity into financial and operational outcomes — for example, lower turnover reduces hiring and productivity loss costs, while faster hiring shortens vacancy-driven revenue gaps.
This guide covers definitions, core formulas, dashboard examples, a practical implementation playbook and direct mappings to MiHCM features so teams can move from measurement to action within weeks.
Who should read this: CHROs and Heads of People who need executive-ready metrics; People Analytics and HR Data teams building models and dashboards; HRBPs and managers who require operational KPIs; Talent Acquisition leads improving hiring efficiency; and finance partners who want HR metrics tied to ROI.
People analytics metrics at a glance
Top 5 KPIs every leader should track:
- Turnover rate
- Retention rate (by segment)
- Time-to-hire (or time-to-fill)
- Employee engagement (eNPS or engagement index)
- Revenue per employee
These five capture hiring cost & speed, workforce stability, engagement drivers and business value.
Immediate wins to build in weeks 1–4:
- Recruitment funnel dashboard showing applicants → interviews → offers → hires.
- Turnover heatmap by manager and tenure to locate attrition pockets.
- Weekly attendance/absenteeism alert for rising unplanned absence.
Implementation snapshot (6–12 weeks):
- Data audit — 2 weeks.
- Formula validation — 1–2 weeks.
- Dashboard build — 2–4 weeks.
- Pilot & iterate — 2–4 weeks.
Product fit: MiHCM Lite or Enterprise acts as the canonical HRIS; Analytics and MiHCM Data & AI provide dashboards and predictive signals; MiA and SmartAssist surface manager nudges and daily insights to accelerate interventions.
What are people analytics metrics?
People analytics metrics are measurements that quantify workforce activity and outcomes. Organise metrics by functional category so teams can own, measure, and act on them:
| Recruitment | Performance | Retention | Engagement | D&I | Payroll & Cost |
|---|---|---|---|---|---|
| Time-to-hire, applicants-to-hire, quality-of-hire | Productivity, goal achievement, manager ratings | Turnover rate, retention rate, early turnover | eNPS, engagement index, pulse response | Representation, pay equity, promotion parity | Cost-per-hire, payroll spend, revenue per employee |
Soft vs. hard metrics: Hard metrics originate in transactional systems (HRIS, ATS, payroll, time systems): headcount, hires, pay, attendance. Soft metrics require perception-based inputs, typically surveys: engagement, satisfaction and manager effectiveness. Treat soft metrics carefully: ensure sufficient sample sizes and consistent question framing.
Operational vs strategic metrics: Operational metrics — e.g., time-to-fill, absenteeism — are monitored weekly or monthly for immediate action. Strategic metrics — e.g., human capital ROI, internal mobility rate — inform quarterly or annual planning and investment decisions.
Three types of HR metrics
- Descriptive: what happened (headcount, separations).
- Diagnostic: why it happened (correlations between manager tenure and attrition).
- Predictive: what will happen (turnover risk scores driven by historical patterns).
How to prioritise metrics
Prioritise metrics using five filters: business impact, measurability, data availability, ownership and reporting frequency.
Start with high-impact, high-availability measures and map each to an owner and cadence.
Why people analytics metrics are important
People analytics metrics convert HR activity into measurable business outcomes. When linked to financial KPIs, they allow HR to quantify the value of interventions: lowering voluntary turnover reduces replacement and productivity costs; improving time-to-hire reduces vacancy-driven revenue loss.
Risk spotting: systematic tracking surfaces attrition pockets, diversity gaps and productivity declines before they escalate. Decision velocity improves because dashboards provide near real-time signals, shortening the time from insight to action and enabling proactive interventions.
Performance alignment: metrics map individual outcomes to team and company objectives (OKRs). That visibility lets HR prioritise coaching, promotions and resource allocation where they move the needle.
How metrics drive strategic conversations with the C-suite
- Frame HR outcomes as ROI: show cost-per-hire reductions, savings from lower turnover and revenue-per-employee improvements.
- Use segmented results (by function, region, manager) to highlight where investments will yield gains.
- Present predictive signals (attrition risk, time-to-productivity) alongside recommended actions and expected financial impact to secure executive buy-in.
The five key HR metrics every leader should track
Recommended core five metrics:
- Turnover rate — measures separations relative to workforce size; segment by voluntary vs involuntary and by manager.
- Retention rate (segment-based) — percent of employees retained over a window, useful for early tenure cohorts.
- Time-to-hire (or time-to-fill) — speed of hiring; shorter times reduce vacancy costs and lost capacity.
- Employee engagement (eNPS or composite engagement index) — leading indicator for retention and performance.
- Revenue per employee — a productivity/value metric linking headcount to business output.
Why these five: together they capture cost (hiring + turnover), capacity (speed-to-hire), engagement (predicts retention & productivity) and value (revenue per employee). Segment each metric by team, location, tenure and manager to reveal actionable variation and avoid misleading averages.
How to set targets: combine your historical baseline, industry benchmark and growth plan. Assign owners: HRBP for retention, TA lead for time-to-hire, finance for revenue-per-employee. Reporting cadence: weekly operational views for managers, monthly summaries for HR leadership and quarterly strategic reports to the executive team.
Essential formulas and calculation methods
This section gives compact, production-ready formulas and short worked examples you can paste into analytics tools or CSV templates.
Turnover rate (basic): Formula: (Number of separations in period ÷ Average headcount in period) × 100. This is a standard industry formula used by SHRM and CIPD for reporting separations as a percentage of the workforce. (SHRM, 2017). Use separate lines for voluntary and involuntary when available.
Worked example: 18 separations in a quarter; average headcount = 600 → (18 ÷ 600) × 100 = 3.0% quarterly turnover.
Time-to-hire vs time-to-fill: Time-to-hire: days from candidate application or identification to offer acceptance. Time-to-fill: days from job requisition/opening to offer acceptance or start date. These distinctions are common in TA reporting and recommended for clarity. (SHRM).
Worked example: Requisition posted Jan 1; offer accepted Jan 25 → time-to-fill = 24 days. Candidate applied Jan 18 and accepted Jan 25 → time-to-hire = 7 days.
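The date arithmetic for the worked example above can be sketched in a few lines of Python (variable names are illustrative):

```python
from datetime import date

# Time-to-fill vs time-to-hire, using the worked example dates above.
requisition_posted = date(2024, 1, 1)
candidate_applied = date(2024, 1, 18)
offer_accepted = date(2024, 1, 25)

time_to_fill = (offer_accepted - requisition_posted).days  # requisition open → acceptance
time_to_hire = (offer_accepted - candidate_applied).days   # application → acceptance

print(time_to_fill, time_to_hire)  # → 24 7
```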
Cost-per-hire: Formula: (Total internal recruiting costs + external recruitment costs + agency fees + onboarding costs) ÷ Number of hires. This approach follows guidance used in HR analytics frameworks. (SHRM, 2022).
Worked example: internal TA salaries & tools = $20,000; job ads & external vendors = $10,000; agency fees = $5,000; onboarding costs = $5,000; hires = 8 → cost-per-hire = ($40,000 ÷ 8) = $5,000.
eNPS (employee Net Promoter Score): Formula: %Promoters − %Detractors. Promoters are respondents scoring 9–10, passives 7–8, detractors 0–6. The result ranges from −100 to +100 and is widely used as a concise engagement proxy. (HBR, 2021).
Worked example: 200 survey respondents: 120 promoters, 50 passives, 30 detractors → eNPS = (120 ÷ 200 × 100) − (30 ÷ 200 × 100) = 60 − 15 = +45.
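A minimal Python sketch of the eNPS formula, checked against the worked example above (the function name `enps` is illustrative):

```python
def enps(promoters: int, passives: int, detractors: int) -> float:
    """employee Net Promoter Score: %promoters minus %detractors (range -100 to +100)."""
    total = promoters + passives + detractors
    if total == 0:
        raise ValueError("no survey respondents")
    return (promoters - detractors) / total * 100

# Worked example from the text: 120 promoters, 50 passives, 30 detractors
print(enps(120, 50, 30))  # → 45.0
```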
Revenue per employee: Formula: Total revenue in period ÷ Average number of employees (or FTEs) during period. Use average headcount or FTE to smooth seasonal hiring. This metric is a common productivity proxy in HR reporting. (World Bank, 2024).
Worked example: $12,000,000 revenue in FY; average headcount = 200 → revenue per employee = $60,000.
Absenteeism rate: Formula: (Number of absent days ÷ Total working days) × 100. Commonly used for sickness absence reporting; exclude approved PTO where required. (SHRM, 2023).
CSV-ready cheat-sheet: provide columns for period_start, period_end, separations, avg_headcount, hires, total_revenue, absent_days, total_working_days; then paste formulas into sheet cells to compute the metrics above.
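A sketch of how the cheat-sheet columns translate into the formulas above, using Python's standard `csv` module; the sample row reuses the worked-example figures and is illustrative only:

```python
import csv
import io

# One sample row using the cheat-sheet columns above; values are illustrative.
sample = """period_start,period_end,separations,avg_headcount,hires,total_revenue,absent_days,total_working_days
2024-01-01,2024-03-31,18,600,8,12000000,240,36000
"""

for row in csv.DictReader(io.StringIO(sample)):
    turnover_rate = int(row["separations"]) / float(row["avg_headcount"]) * 100
    revenue_per_employee = float(row["total_revenue"]) / float(row["avg_headcount"])
    absenteeism_rate = int(row["absent_days"]) / int(row["total_working_days"]) * 100
    print(f"turnover {turnover_rate:.1f}% | "
          f"revenue/employee ${revenue_per_employee:,.0f} | "
          f"absenteeism {absenteeism_rate:.2f}%")
```

The same expressions paste directly into spreadsheet cells if you prefer a sheet over a script.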
How to choose the right metrics for your organisation
Start by mapping 1–3 primary business objectives (growth, cost-control, product delivery) and select metrics that move those levers. For example, if growth is the priority, emphasise time-to-hire, applicants-to-hire and time-to-productivity; if cost-control is the priority, focus on turnover, cost-per-hire and overtime spend.
Feasibility matrix: Score candidate metrics on two axes: business impact and data availability. Prioritise metrics that are high-impact and high-availability for the initial rollout; schedule lower-availability metrics (e.g., qualitative engagement drivers) for phase 2 once data collection is stable.
Balance leading and lagging indicators:
- Leading: applications per job, engagement trend, manager NPS — useful for early interventions.
- Lagging: turnover rate, revenue per employee — confirm outcomes and ROI.
Stakeholder mapping:
Identify metric consumers and tailor outputs: managers get weekly operational tiles (time-to-hire, absenteeism); HRBPs get monthly segmented reports; CFO receives quarterly human capital ROI and revenue-per-employee analysis.
Pilot approach:
Pick a small set of metrics and one business unit. Validate formulas, assign owners, test dashboard UX with managers, measure against defined success criteria, then scale. Use MiHCM Analytics templates and SmartAssist nudges to accelerate adoption.
Data collection, quality checks and privacy
Canonical sources: payroll, HRIS (MiHCM), ATS, LMS, time & attendance systems, and pulse surveys.
Ensure each metric maps to specific fields: for turnover you need employee id, separation date and reason; for time-to-hire you need requisition open date, candidate apply date, offer acceptance date; for absenteeism you need day-level attendance or absence records.
Data quality checks:
- Completeness: verify required fields are present for >98% of records for initial metrics.
- Duplicates: detect duplicate employee IDs or multiple active records and resolve.
- Consistent identifiers: use a single canonical employee ID (not email) for joins.
- Time-series sanity checks: validate no future-dated hires or negative tenure values.
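The checks above can be scripted before any dashboard build. A minimal sketch in Python; record fields (`employee_id`, `hire_date`) and the 98% threshold follow the text, but the function and sample data are illustrative:

```python
from datetime import date

# Illustrative records; field names are assumptions, not a MiHCM export schema.
records = [
    {"employee_id": "E001", "hire_date": date(2023, 5, 1)},
    {"employee_id": "E002", "hire_date": date(2026, 1, 1)},  # future-dated hire
    {"employee_id": "E001", "hire_date": date(2023, 5, 1)},  # duplicate ID
]

def quality_report(records, today=date(2024, 6, 1)):
    """Run the completeness, duplicate and time-series sanity checks described above."""
    ids = [r["employee_id"] for r in records]
    duplicates = {i for i in ids if ids.count(i) > 1}
    future_hires = [r["employee_id"] for r in records if r["hire_date"] > today]
    complete = sum(1 for r in records if r.get("employee_id") and r.get("hire_date"))
    completeness = complete / len(records) * 100
    return {
        "completeness_pct": completeness,
        "duplicate_ids": sorted(duplicates),
        "future_dated_hires": future_hires,
        "passes_threshold": completeness > 98,  # >98% completeness target from the text
    }

print(quality_report(records))
```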
Dealing with missing or noisy data: Impute pragmatically for non-sensitive numeric fields (e.g., fill missing hire dates with requisition close where appropriate) and flag or exclude sensitive fields (e.g., self-reported demographics) if incomplete. Always track an ‘unknown’ bucket rather than silently filling sensitive attributes.
Privacy & ethics: Limit access via role-based permissions, anonymise when sharing aggregated results, follow applicable regulations (e.g., GDPR, CCPA) and obtain consent for surveys. Store an audit trail of dataset versions, dashboard changes and formula updates.
Governance: Create a data dictionary (field, source, owner, refresh cadence), maintain versioned dashboard definitions and designate metric owners responsible for accuracy and action. Quick wins: run a two-week data audit on MiHCM exports and fix the top three issues before building dashboards.
Visualisation & dashboards
Design philosophy: clarity over complexity — each dashboard should answer one clear question (for example: where is attrition concentrated this quarter?). Use a consistent KPI layout: top-line metric tiles, trendline, segmented breakdowns and an action card that lists owners and next steps.
KPI layout:
- Topline tiles: current value, delta vs prior period, and sample size.
- Trendline: 12–24 month trend to show seasonality.
- Segmentation: breakdowns by team, location, tenure and manager.
- Action card: recommended owner, playbook link and last action taken.
Use-case dashboards:
- Recruitment funnel: applicants, screened, interviewed, offered, hired, time-to-hire, cost-per-hire.
- Attrition & retention heatmap: manager-level turnover, tenure cohorts, voluntary vs involuntary.
- Weekly attendance & overtime: absent days, overtime hours, teams exceeding thresholds.
- Leadership summary (quarterly): top-line people KPIs, trendlines and financial impact estimates.
Visualisation tips:
- Use consistent colours for status (green/amber/red) and avoid truncated axes that mislead.
- Always display denominators (sample size) and, where relevant, confidence or volatility notes for small groups.
- Enable filters for tenure, role and manager and allow drill-down to employee lists with permission checks.
Templates and exports: Provide downloadable dashboard templates and CSV/JSON KPI exports so teams can import into BI tools (Power BI, Tableau) or MiHCM Analytics. Include a KPI layout template (topline tiles, trend, segmentation, actions) and a JSON dashboard example for rapid deployment.
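As one possible shape for the JSON dashboard example mentioned above, here is a hypothetical definition serialised from Python. Field names follow the KPI layout (topline tiles, trend, segmentation, action card) but are assumptions, not a MiHCM Analytics or BI-tool schema:

```python
import json

# Hypothetical dashboard definition; every key name here is illustrative.
dashboard = {
    "title": "Attrition & retention",
    "question": "Where is attrition concentrated this quarter?",
    "tiles": [
        {"kpi": "turnover_rate", "value": 3.0, "delta_vs_prior": -0.4, "sample_size": 600},
    ],
    "trend": {"kpi": "turnover_rate", "window_months": 24},
    "segments": ["team", "location", "tenure", "manager"],
    "action_card": {"owner": "HRBP", "playbook": "retention-v1", "last_action": None},
}

print(json.dumps(dashboard, indent=2))
```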
Predictive metrics
High-value predictive signals forecast outcomes that matter: early turnover risk, time-to-productivity for new hires, absenteeism forecasting, and succession risk. Predictive models combine HRIS attributes, behavioural signals and survey data to produce risk scores managers can act upon.
Model inputs:
- HRIS attributes: tenure, past promotion cadence, performance ratings.
- Behavioural signals: LMS completions, timesheet patterns, meeting load.
- Survey signals: engagement trend, manager feedback, onboarding satisfaction.
Validation:
Back-test models on historical cohorts to measure precision and recall. Use a holdout validation set, measure uplift from interventions and set thresholds that balance false positives with actionable workloads for managers.
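The precision/recall back-test described above reduces to a simple confusion-matrix count. A sketch in pure Python, with an illustrative holdout cohort (scores and outcomes are made up for demonstration):

```python
def precision_recall(predicted_risk, actual_leavers, threshold=0.5):
    """Back-test turnover-risk scores against a historical holdout cohort."""
    tp = fp = fn = 0
    for score, left in zip(predicted_risk, actual_leavers):
        flagged = score >= threshold
        if flagged and left:
            tp += 1          # correctly flagged leaver
        elif flagged and not left:
            fp += 1          # flagged but stayed (manager workload cost)
        elif not flagged and left:
            fn += 1          # missed leaver
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative holdout cohort: model risk scores vs. who actually left.
scores = [0.9, 0.8, 0.6, 0.4, 0.2, 0.1]
left = [True, True, False, True, False, False]
print(precision_recall(scores, left))
```

Sweeping `threshold` over this function is one way to find the balance between false positives and actionable manager workloads.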
From signal to action:
- Define risk buckets: low/medium/high.
- Attach playbooks to each bucket (for example: high-risk → 1:1 manager outreach within 7 days + tailored retention offer).
- Monitor intervention lift: measure change in turnover/engagement over the next 90 days.
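The bucket-to-playbook mapping above can be encoded directly. A hypothetical sketch; the thresholds and the medium/low playbook wording are assumptions (only the high-risk playbook comes from the text):

```python
# Playbook text for "high" follows the example above; "medium" and "low" are assumed.
PLAYBOOKS = {
    "high": "1:1 manager outreach within 7 days + tailored retention offer",
    "medium": "manager check-in at next scheduled 1:1",
    "low": "no action; continue monitoring",
}

def risk_bucket(score: float) -> str:
    """Map a 0-1 risk score to a bucket; cut-offs are illustrative."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

for score in (0.85, 0.5, 0.1):
    bucket = risk_bucket(score)
    print(f"{score:.2f} → {bucket}: {PLAYBOOKS[bucket]}")
```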
MiHCM Data & AI use-case
MiHCM Data & AI can surface at-risk employees and predicted absenteeism and present suggested manager actions via SmartAssist. Combine predictive signals with simple experiment designs (A/B rollout across manager groups) to quantify ROI before scaling.
Model validation checklist: Cohort selection, training/holdout split, performance metrics (precision, recall, AUC), baseline comparison and period for uplift measurement.
Implementation roadmap
Use a structured six- to twelve-week approach to move from scoping to pilot and scale.
| Weeks | Activities |
|---|---|
| 0–2 | Scoping & stakeholder alignment: define objectives, metric owners and success criteria. |
| 2–4 | Data audit & canonical source setup: connect MiHCM, ATS, LMS, and time systems; create a data dictionary. |
| 4–8 | Formula validation & dashboard build: run iterative sprints with business users and refine definitions. |
| 8–12 | Pilot one BU: collect feedback, tune definitions and measure initial impact. |
| Ongoing | Governance: quarterly reviews, retrain models, expand metrics, and document changes. |
Rollout checklist:
- Pilot KPIs selected and formula-validated.
- Data owners assigned and data dictionary published.
- Training plan for managers and HRBPs delivered.
- Playbooks attached to each KPI and SmartAssist nudges configured.
Change management: train managers on interpreting metrics, link metrics to discrete playbooks and use SmartAssist or MiA to push timely nudges and suggested actions. Measure adoption (manager logins, actions taken) and iterate dashboards based on usage patterns.
Tools and dashboards
Options for people analytics range from in-house BI (Power BI, Tableau) to specialised people analytics platforms and bundled HRIS analytics. Choose based on data complexity, modelling needs and speed-to-value.
Where MiHCM fits:
- MiHCM Lite/Enterprise: canonical HRIS storing payroll, attendance, performance reviews and headcount.
- Analytics: embedded dashboard layer with prebuilt KPI templates and exports.
- MiHCM Data & AI: predictive modelling for turnover risk, absenteeism forecasting and time-to-productivity estimates.
- MiA/SmartAssist: conversational insights and manager nudges for frontline intervention.
Cost/benefit checklist:
- Speed to value: prebuilt templates reduce build time.
- Security & governance: single HRIS reduces inconsistent definitions.
- Vendor considerations: evaluate model portability and API access if in-house analytics are needed later.
When to choose a specialised tool: large organisations with complex modelling or data-science teams may require bespoke platforms. For most mid-market organisations, MiHCM Analytics plus Data & AI provide the required predictive capabilities and manager-facing adoption features.
Benchmarks, targets and monitoring cadence
Set benchmarks using a combination of internal historical baselines, industry benchmarks and business-plan targets. Historical baselines give realistic starting points; industry benchmarks provide external context; business-plan targets align metrics with strategic goals.
Example target-setting: reduce voluntary turnover by 15% from baseline within 12 months; improve time-to-hire by 20% in six months. For each target, specify the owner, required interventions and expected financial impact.
Reporting cadence:
- Managers: weekly operational reports (attendance, time-to-fill pipeline).
- HRBP: monthly segmented reviews (retention, engagement trends).
- Executive: quarterly summary with financial impact and trend commentary.
Alerts & thresholds: Configure automated alerts for metric breaches (for example, manager-level voluntary turnover above threshold or sudden drops in eNPS). Attach immediate recommended actions to each alert to reduce time-to-intervention.
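A minimal sketch of the alerting logic above, pairing each breach with a recommended action; the threshold values, metric keys and action wording are all illustrative assumptions:

```python
# Illustrative thresholds: breach levels would come from your own baselines.
THRESHOLDS = {"voluntary_turnover_pct": 5.0, "enps_drop": -10}

def check_alerts(manager, metrics):
    """Return (manager, breach, recommended action) tuples for any threshold breach."""
    alerts = []
    if metrics.get("voluntary_turnover_pct", 0) > THRESHOLDS["voluntary_turnover_pct"]:
        alerts.append((manager, "voluntary turnover above threshold",
                       "schedule stay interviews with the team within 2 weeks"))
    if metrics.get("enps_delta", 0) < THRESHOLDS["enps_drop"]:
        alerts.append((manager, "sudden eNPS drop",
                       "run a targeted pulse survey and review recent changes"))
    return alerts

print(check_alerts("A. Perera", {"voluntary_turnover_pct": 7.2, "enps_delta": -15}))
```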
Continuous improvement: Track leading indicators and measure the lift from interventions in subsequent quarters. Maintain a backlog of metric improvements and test changes in pilots before organisation-wide rollout.
Practical examples and common use-cases
Worked example 1 — Reducing early turnover
- Define cohort: hires within their first 180 days.
- Measure drivers: onboarding NPS, manager tenure, time-to-productivity.
- Intervention: targeted onboarding boost (mentor assignment + manager 30/60/90 check-ins) for high-risk hires.
- Success criteria: reduce 90-day turnover by 30% vs baseline after 3 months.
Worked example 2 — Improving hiring speed
- Analyse TA funnel: identify top-of-funnel drop-offs (sourcing, screening, interview scheduling).
- Intervention: centralise interview scheduling, standardise scorecards and deploy targeted sourcing channels.
- Measure: time-to-offer and time-to-hire improvements and quality-of-hire via 90-day performance.
Worked example 3 — Attendance & productivity
- Detect teams with rising overtime and correlate with performance metrics.
- Intervention: temporary resource rebalancing or hiring and manager coaching on workload prioritisation.
- Measure: overtime hours reduction and impact on quality/productivity next quarter.
Templates: Include CSV formulas, dashboard JSON examples and a one-page manager playbook. Run pilots by manager groups (A/B style) and measure lift against control groups before scaling interventions.
Common pitfalls and how to avoid them
Over-measurement: tracking too many KPIs without owners causes confusion and low impact. Start with a compact set of metrics that map to business outcomes and give each metric an owner and cadence.
Bad definitions: inconsistent formulas across teams produce conflicting numbers. Maintain a single source of truth — a versioned data dictionary with canonical formulas and examples.
Ignoring sample size: small-team fluctuations can look like trends. Always show sample size and confidence notes; avoid taking irreversible action on small-n changes.
Privacy missteps: do not share identifiable predictive scores broadly. Aggregate or anonymise predictive outputs, and limit access via role-based controls.
No actionability: metrics without playbooks create metric fatigue. Attach a documented playbook (owner, steps, timeframe) to each KPI and measure the execution of those steps, not just the metric.
Making people analytics metrics operational
Measure what matters: choose KPIs aligned to business outcomes, assign owners, and set targets based on historical baselines and benchmarks. Instrument the data with canonical sources (MiHCM) and automated refreshes so dashboards remain accurate and actionable. Operationalise metrics by combining dashboards, playbooks and SmartAssist nudges to embed them into managers’ workflows.
Iterate: validate formulas, pilot solutions, measure impact and scale successful interventions.
Actionable next steps (30/60/90 day plan)
- 30 days: run a two-week data audit, publish a data dictionary, select three pilot KPIs.
- 60 days: build dashboards, validate formulas, assign metric owners and run manager training.
- 90 days: pilot with one BU, measure interventions, refine playbooks and prepare scale plan.
Frequently Asked Questions
What are people analytics metrics?
Quantitative measures of workforce health and performance across recruitment, retention, engagement, performance and D&I.
How frequently should metrics be measured?
It depends on the metric and audience: weekly for operational metrics such as attendance and time-to-fill, monthly for segmented HRBP reviews such as retention and engagement trends, and quarterly for strategic executive reporting.
Which five metrics matter most?
Turnover, retention, time-to-hire, engagement, revenue per employee.
How do you calculate turnover?
Separations ÷ average headcount × 100 (see the worked example under "Essential formulas and calculation methods" above). (SHRM, 2017)