When organisations document Absence, Onboarding, and Talent as separate streams, the delivery risk is predictable: each area looks “complete” in isolation, but the experience breaks at the seams.
It’s worth saying upfront: these three areas are simply convenient examples. The same delivery dynamics show up across HCM processes more broadly—anywhere eligibility rules, approvals, security, templates, and data quality need to work together consistently.
The reason is simple. These areas are connected by the same underlying mechanics:
- eligibility and worker categorisation
- security and visibility
- approvals and routing
- template-driven experiences (plans, tasks, journeys)
- data quality and governance
And in Oracle HCM, those mechanics are powerful—but sensitive. A small design ambiguity can translate into a large amount of rework once configuration, testing, and user enablement are underway.
This blog treats Absence, Onboarding, and Talent design artefacts as one integrated design intent—not because they are the only important workstreams, but because they clearly illustrate how cross-cutting HCM design decisions either reinforce each other or conflict. The goal is to show how to convert solution design documents into build-ready decisions that hold up under real operational complexity—without over-engineering the programme.
Start with the “moments” people actually experience
A useful way to unify these three areas is to design around moments rather than modules:
- Day one readiness: what a new joiner sees, completes, and understands
- Time away: how people request time, how managers respond, what exceptions exist
- Growth and performance: how goals are set, progress is discussed, performance is assessed, and development is encouraged
Even if your scope includes other HCM processes, this “moments” approach still works. The specific moments change (for example, job changes, transfers, role updates, service events), but the underlying design questions remain the same:
- Who is eligible for what, and when?
- What needs approval, and what doesn’t?
- What is shown to the employee versus the manager versus HR?
- What templates are reused versus localised?
- What data must be correct for the process to behave predictably?
Absence design that doesn’t collapse under exceptions
Absence configuration often looks straightforward in a design document: define absence types, define plans, decide whether balances apply, and document approvals. The complexity shows up when the business expects the system to handle reality:
- overlapping reasons and “special cases”
- different approval expectations by leave type
- visibility rules (what employees can see, what managers can act on)
- entitlement behaviours that must remain consistent over time
Design principle: separate “absence types” from “absence decisions”
A strong design makes it clear which parts are employee choice and which parts are policy decision. Employee choice: what category they select, what dates they request, what notes they provide. Policy decision: whether approval is required, what documentation is expected, how exceptions are handled.
If those are blended, teams end up with absence types that are overloaded and hard to maintain.
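To make the separation concrete, here is a minimal sketch in Python. The type names, policy attributes, and catalogue entries are illustrative assumptions, not Oracle HCM objects — the point is that employee choice and policy decision live in distinct structures, so an absence type never has to encode approval behaviour inside itself:

```python
from dataclasses import dataclass
from datetime import date

# Policy decision: owned by HR policy, attached to the absence type.
@dataclass(frozen=True)
class AbsencePolicy:
    requires_approval: bool
    documentation_required: bool
    exception_handling: str  # e.g. "route_to_manager", "route_to_hr"

# Employee choice: only what the requester actually controls.
@dataclass
class AbsenceRequest:
    absence_type: str  # the category the employee selects
    start: date
    end: date
    notes: str = ""

# Hypothetical policy catalogue: each absence type maps to exactly
# one policy decision, maintained in one governed place.
POLICIES = {
    "Annual Leave": AbsencePolicy(requires_approval=True,
                                  documentation_required=False,
                                  exception_handling="route_to_manager"),
    "Sick Leave": AbsencePolicy(requires_approval=False,
                                documentation_required=True,
                                exception_handling="route_to_hr"),
}

def policy_for(request: AbsenceRequest) -> AbsencePolicy:
    return POLICIES[request.absence_type]
```

Changing a documentation rule then means editing one policy entry, not untangling an overloaded absence type.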
Approvals: decide the pattern before you configure the rule
Absence approvals need an explicit pattern, not just a list:
- Which requests always require approval?
- Which requests are informational but still tracked?
- What happens when the usual approver is unavailable?
- What should the manager see in order to approve confidently?
Approvals become fragile when the business expects implicit logic that wasn’t written down. Design documents should include approval scenarios (including exceptions), not only a routing statement.
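One way to keep implicit logic written down is a scenario table that is reviewed with the business before anything is configured. This is a minimal sketch under assumed scenario names and outcomes — not an Oracle HCM routing rule — showing how each documented scenario gets an explicit, testable home:

```python
# Hypothetical scenario table: each row is a written-down approval
# scenario with an explicit outcome. Order matters; first match wins.
APPROVAL_SCENARIOS = [
    (lambda r: r["type"] == "Annual Leave" and r["days"] <= 1,
     "auto_approve_tracked"),
    (lambda r: r["approver_available"] is False,
     "route_to_delegate"),
    (lambda r: True,  # explicit default, not implicit behaviour
     "route_to_line_manager"),
]

def route(request: dict) -> str:
    """Return the agreed outcome for a request, per the scenario table."""
    for predicate, outcome in APPROVAL_SCENARIOS:
        if predicate(request):
            return outcome
    raise ValueError("no scenario matched")
```

Because the table is ordered and exhaustive, "what happens when the approver is unavailable" is a row that can be reviewed and tested, not an assumption discovered after go-live.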
Employee view: treat it as an experience design, not a screen list
Absence succeeds when employees can self-serve confidently. A practical design addresses clarity of naming and categories, guidance at the point of request, what statuses mean (and whether the employee must act), and what managers see that supports a consistent decision.
If you don’t design the experience intentionally, adoption issues get misdiagnosed as “training gaps” when the root cause is ambiguity.
Onboarding that is operationally owned, not just configured
Onboarding design documents typically list tasks, owners, populations, templates, and message content. That’s necessary—but the success factor is governance: who owns the onboarding experience as policies and roles evolve.
Design principle: onboarding is a system of triggers + ownership
Onboarding tends to drift when any of these are unclear: what triggers the onboarding flow (and when it starts), who owns each task type (and what “complete” means), which tasks are mandatory versus contextual, and how communications are maintained as wording changes over time.
The mechanics of tasks and messaging are flexible. The programme risk is leaving those mechanics without an operating model, so the experience becomes inconsistent after go-live.
Action owner is not the same as accountability
Designs often list an “action owner,” but delivery needs a clearer distinction: executor (completes the task) and accountable owner (ensures the task stays relevant, accurate, and monitored).
Without that, onboarding becomes a static checklist rather than a living workflow.
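The executor/accountable distinction can be made explicit in the task design itself. A minimal sketch, assuming hypothetical task and trigger names: every task carries both roles, and design review can flag tasks that have someone to do them but no one to keep them relevant:

```python
from dataclasses import dataclass

@dataclass
class OnboardingTask:
    name: str
    executor: str           # who completes the task (e.g. "New Hire")
    accountable_owner: str  # who keeps it relevant, accurate, monitored
    mandatory: bool
    trigger: str            # hypothetical event name, e.g. "HIRE_CONFIRMED"

def ownership_gaps(tasks: list[OnboardingTask]) -> list[str]:
    """Tasks with an executor but no accountable owner — the
    'static checklist' failure mode, caught at design time."""
    return [t.name for t in tasks if not t.accountable_owner.strip()]
```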
Keep templates controlled
Templates are powerful—until every team creates variants. A strong design defines what is global and reused, what is localised and why, and how changes are approved (so you avoid silent drift).
This matters whenever worker populations differ by category or employment type, because uncontrolled template growth becomes long-term support debt.
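Silent drift is easy to detect if the catalogue records, for every localised template, its global parent and the documented reason for the variant. A minimal sketch with hypothetical template codes — any local entry missing either field is a governance gap:

```python
# Hypothetical template catalogue: every localised variant must name
# its global parent and a documented reason, or it is silent drift.
TEMPLATES = [
    {"code": "ONB_GLOBAL", "scope": "global", "parent": None,
     "reason": "baseline"},
    {"code": "ONB_UK", "scope": "local", "parent": "ONB_GLOBAL",
     "reason": "UK right-to-work checks"},
    {"code": "ONB_UK_SALES", "scope": "local", "parent": None,
     "reason": ""},  # drift: no parent, no approved reason
]

def silent_drift(templates: list[dict]) -> list[str]:
    """Local templates lacking a parent or a documented reason."""
    return [t["code"] for t in templates
            if t["scope"] == "local" and (not t["parent"] or not t["reason"])]
```

Run as a standing report, a check like this turns "how many variants do we have, and why?" into a routine answer rather than an archaeology exercise.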
Talent design: where structure matters more than features
Talent (goals, performance, competencies, ratings, assessments) is the area where programmes most often “get the features working” but struggle to make outcomes consistent and credible.
Goal plans: define the governance model first
A goal plan isn’t just a container. It implies operating rules: when goals can be created or updated, whether validation rules apply, how goals relate to performance documents, and which populations are included through eligibility rules.
Goal plan behaviour is heavily influenced by how eligibility is defined and how templates are configured. If your design doesn’t make these dependencies explicit, testing becomes a cycle of surprises.
Eligibility profiles: treat them as shared assets, not one-off filters
Eligibility rules show up everywhere: onboarding populations, absence policy application, performance participation, manager interactions, and more.
A practical design approach is to define eligibility profiles as a controlled catalogue: use clear naming standards, avoid duplicated logic in multiple places, document the “reason for existence” (what business rule it represents), and assign ownership for ongoing maintenance.
This reduces the common failure mode where eligibility is re-implemented slightly differently across processes, creating inconsistent outcomes.
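The catalogue approach can be sketched as a simple registry with automated checks. The naming standard, profile names, and rule expressions below are illustrative assumptions — the point is that naming, ownership, and duplicate logic are all machine-checkable once eligibility is treated as a shared asset:

```python
import re

# Hypothetical naming standard: AREA_POPULATION_PURPOSE, upper snake case.
NAME_PATTERN = re.compile(r"^[A-Z]+_[A-Z]+_[A-Z_]+$")

CATALOGUE = {
    "ABS_UK_FULLTIME": {"rule": "country=UK AND assignment=FT",
                        "owner": "HR Policy"},
    "PERF_ALL_MANAGERS": {"rule": "manager_flag=Y",
                          "owner": "Talent CoE"},
}

def catalogue_issues(catalogue: dict) -> list[str]:
    """Flag naming-standard breaches, missing owners, and duplicated logic."""
    issues, seen_rules = [], {}
    for name, entry in catalogue.items():
        if not NAME_PATTERN.match(name):
            issues.append(f"{name}: breaks naming standard")
        if not entry.get("owner"):
            issues.append(f"{name}: no maintenance owner")
        duplicate_of = seen_rules.get(entry["rule"])
        if duplicate_of:
            issues.append(f"{name}: duplicates logic in {duplicate_of}")
        seen_rules[entry["rule"]] = name
    return issues
```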
Competencies: don’t confuse mapping with meaning
Competency configuration often includes mapping competencies to job contexts. The risk is treating the mapping as the goal, rather than ensuring it is usable.
- Are competencies written in a way that supports meaningful discussion?
- Are they visible where they matter (not buried)?
- Are they aligned to development actions, not just assessments?
When competencies are configured without experience intent, managers see them as “content” rather than practical guidance.
Ratings and assessments: prioritise interpretability and fairness
Rating models and assessment questions are sensitive because they shape perceived fairness.
A durable design addresses consistent interpretation (what each level means in practice), guidance for reviewers (so ratings aren’t arbitrary), and how feedback questions are used (and what happens after they are captured).
The technical setup is the easy part. The design challenge is ensuring the model is understandable, consistently applied, and supportable.
The connecting tissue: approvals, visibility, and evidence-led testing
Approvals must be scenario-designed
If approvals are only documented as “who approves,” you’ll miss delegation and unavailable-approver handling, exception routing, and the information approvers need to decide confidently. Design approvals as scenarios, then configure.
Visibility must be deliberate
Employee experience and manager effectiveness depend on what they can see and do. Poor visibility design creates unnecessary escalations to HR, inconsistent manager behaviour, and lower trust in the system. Treat visibility as part of solution design, not a post-build tweak.
Testing must prove outcomes, not screens
For these areas, testing must validate eligibility across worker populations, approvals under exceptions, template and task triggers, consistent goal plan participation, and understandable rating guidance/question flow.
When testing is evidence-led, design gaps surface early—before training becomes damage control.
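Evidence-led testing starts with knowing which outcome dimensions have a scenario that produces a concrete artefact. A minimal sketch of that coverage check — the dimension names mirror the list above; scenario names and evidence artefacts are illustrative assumptions, not Oracle HCM reports:

```python
# Outcome dimensions the programme must prove, per the list above.
REQUIRED_DIMENSIONS = {
    "eligibility", "approvals", "triggers",
    "goal_participation", "ratings",
}

# Hypothetical scenario register: each entry names the artefact that
# evidences the outcome, not the screen that was clicked.
SCENARIOS = [
    {"dimension": "approvals", "name": "usual approver unavailable",
     "evidence": "routing history export"},
    {"dimension": "eligibility", "name": "part-time worker in each country",
     "evidence": "plan enrolment report"},
    {"dimension": "triggers", "name": "hire date moved after tasks issued",
     "evidence": "task status report"},
]

def uncovered_dimensions(scenarios: list[dict]) -> list[str]:
    """Dimensions with no scenario producing a concrete artefact."""
    covered = {s["dimension"] for s in scenarios if s.get("evidence")}
    return sorted(REQUIRED_DIMENSIONS - covered)
```

Reviewing this gap list at design sign-off is what lets gaps surface before training becomes damage control.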
How PCL typically addresses this in practice
PCL typically supports HCM programmes by treating individual processes (like absence, onboarding, or talent) as examples of a broader delivery discipline: designing, governing, and implementing end-to-end HCM capabilities that remain consistent as organisations evolve.
- Stabilising shared foundations early: workforce structures, eligibility logic, security and visibility patterns, approvals, extensibility, and data ownership are aligned early to avoid downstream drift.
- Turning design into build-ready decisions: design statements are translated into decision points that can be configured, tested, and evidenced.
- Designing for real operational scenarios: scenario-based verification confirms behaviour across role changes, worker populations, delegated approvals, edge cases, and lifecycle events.
- Keeping templates and patterns governable: reusable vs localised templates are defined and changes are controlled for maintainability post go-live.
- Aligning testing, training, and data readiness: teams validate end-to-end process behaviour with representative (often synthetic) data and role-correct access.
The intent is to make solution design “delivery-grade”: consistent, testable, and supportable—across HCM processes broadly, not only within any single module area.
FAQ
Why design these areas together if they’re only part of the wider HCM scope?
Because they demonstrate shared foundations—eligibility, approvals, security, templates, and data dependencies—that apply across HCM processes. If foundations differ between streams, users experience inconsistency even when each process works “correctly.”
What’s the most common cause of approval problems after go-live?
Approval rules documented as a simple routing statement but not designed for real exceptions—such as unavailable approvers, unusual worker assignments, or unclear decision data.
How do we prevent templates from multiplying across workstreams?
Define a reuse strategy and an ownership model. If teams can create variants without governance, the catalogue expands quickly and becomes difficult to maintain.
What makes goal plans and performance templates hard to stabilise?
They rely on connected choices: eligibility, timing rules, template structure, and interaction between goals and performance documents. Without explicit dependencies, configuration becomes trial-and-error.
How should we think about ratings and assessment questions?
Design them for interpretability and consistent use. If reviewers can’t explain the levels or outcome impact, the model is seen as arbitrary regardless of technical correctness.
Getting the design “right” before it becomes expensive
Absence, onboarding, and talent processes are where employees and managers quickly form an opinion about HR operations. In Oracle HCM, the platform can support robust process design—but only if the programme treats solution design as a set of governed decisions, not a set of tables to configure.
When eligibility, approvals, templates, and visibility are stabilised early, the build becomes more predictable, testing becomes more meaningful, and the experience becomes consistent across the moments that matter—while remaining extensible to the wider HCM landscape.