As part of my assignments in the PROLIX European research project, I have written an extensive document about the achievements and lessons learned. The document has been promoted to final status after internal review within the project and can be downloaded here. It should soon also be available from the project site and through the OBELIX community.
Competency models in PROLIX and ISO/IEC 24763 compared
This post compares the data model in PROLIX (document D3.3 and especially the upcoming D3.5) with the ISO/IEC 24763 draft (work in progress).
The differences are mainly in the terminology used. Some translations:
- Action (E1) = task artefact
- Actor (E2) = person artefact
- Competency (E3) = contextualized competency
- Criteria and method (E4) = assessment method
- Environment (E5) = context
- Evaluation, assessment, process (E6) = assessment
- Institution (E7) = organization
- Outcome (E8) = assessment (more comments below)
- Role (E9) = task artefact
Missing in the ISO model (but supported by PROLIX):
- Competency structure and rollup rules
- Learning artefact
- Proficiency levels
- Matching related data like matching profile, confidence score, non-competency-related criteria
- User interface and/or grouping related data like artefact categories, extra info about artefact kind, application domain
- Artefact structure (e.g. a role consists of tasks)
Some important differences include:
In PROLIX we apply the context at the competency level. We assume tasks are always organization specific and thus contextualized (linked with the “environment” in ISO terms). Making this explicit at the competency level gives us more options for competency matching: in our model, a task or role can require competencies from different contexts. This gives more modelling freedom for the context, allows more reuse, and is designed to facilitate cross-organization and cross-context matching of competencies.
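The idea can be sketched in code. This is an illustrative data-model sketch, not the actual PROLIX schema: all class and field names below are hypothetical, chosen only to show how attaching the context to the competency (rather than to the task) lets one task require competencies from several contexts.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    name: str  # e.g. an organization or application domain

@dataclass(frozen=True)
class ContextualizedCompetency:
    competency: str
    context: Context

@dataclass
class Task:
    name: str
    required: list  # list of ContextualizedCompetency

acme = Context("ACME Corp")
generic = Context("industry-wide")

# One task mixes an organization-specific competency with a reusable one.
task = Task("run payroll", [
    ContextualizedCompetency("payroll regulations", acme),
    ContextualizedCompetency("spreadsheet use", generic),
])

contexts = {c.context.name for c in task.required}
print(contexts)  # {'ACME Corp', 'industry-wide'}
```

Because the reusable competency is not tied to any one organization, the same definition can appear in tasks of different organizations, which is what makes cross-organization matching possible.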
In PROLIX we don’t explicitly have the concept “outcome”. We do promote registering the completion of a task as an “assessment”, as this may indicate that you have (some of) the required competencies, especially when you have performed the task often. (Note that you probably want to assign an assessment method with relatively low confidence.)
When studying, outcomes are often diplomas or certificates. These are not registered as such within PROLIX as these are not competencies.
This becomes clear when matching. For jobs there is often a requirement that you hold a certain diploma. Having the diploma does not guarantee the presence of the competencies, as they may have been (partially) lost or forgotten. Similarly, certain required certificates may expire. For example, a lifeguard may need a certificate that is less than two years old. This again does not mean that the competencies required to obtain the certificate are suddenly “lost” when the term expires.
In PROLIX, this kind of requirement should be handled by setting non-competency-related criteria for matching.
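One way to picture such a criterion is as a hard filter that runs independently of competency matching. The function below is a hypothetical sketch (not PROLIX code, and the names are made up) of the lifeguard example: the certificate requirement passes or fails on its own, without saying anything about competencies.

```python
from datetime import date

def meets_certificate_criterion(certificates, required_name, max_age_days, today):
    """A non-competency-related criterion: True if the person holds a
    certificate of the required kind that is not older than max_age_days."""
    return any(
        name == required_name and (today - issued).days <= max_age_days
        for name, issued in certificates
    )

# A person with a lifeguard certificate issued mid-2007, checked on 2009-01-01.
person = [("lifeguard", date(2007, 6, 1))]

# "Less than two years old" modelled as roughly 730 days.
print(meets_certificate_criterion(person, "lifeguard", 730, date(2009, 1, 1)))  # True
```

An expired certificate would simply fail this filter, while the person's computed competency profile stays untouched, which matches the point above that competencies are not “lost” when a certificate expires.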
The “actor shows competency” relation mentioned in the ISO model is implicit in PROLIX. Because evidence distillation based on a matching profile is used, the presence of a competency is computed from the assessments. There is no way in PROLIX to directly assign a competency; it is always done through an assessment, and the matching profile used for the evidence distillation then determines the impact of that assessment on the computed competency profile.
This has major advantages when doing job matching, for example. You can start by selecting persons using a very strict profile which only allows the most reliable assessments to be considered. When this does not result in suitable matches, you can relax your matching profile and include more assessments.
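A minimal sketch of this strict-then-relax idea, under the assumption that a matching profile can be reduced to per-method weights and a threshold; the actual PROLIX evidence distillation is certainly richer than this, and all names and numbers below are illustrative.

```python
def distill(assessments, profile, threshold):
    """Compute which competencies count as present.
    assessments: (competency, assessment_method) pairs
    profile: maps assessment method -> evidence weight (absent method = excluded)
    threshold: minimum accumulated evidence for a competency to count
    """
    evidence = {}
    for competency, method in assessments:
        evidence[competency] = evidence.get(competency, 0.0) + profile.get(method, 0.0)
    return {c for c, score in evidence.items() if score >= threshold}

assessments = [
    ("first aid", "formal exam"),
    ("first aid", "task completion"),
    ("teamwork", "task completion"),
    ("teamwork", "task completion"),
]

strict = {"formal exam": 1.0}                           # only reliable evidence
relaxed = {"formal exam": 1.0, "task completion": 0.4}  # weak signals included

print(distill(assessments, strict, threshold=1.0))   # {'first aid'}
print(distill(assessments, relaxed, threshold=0.8))  # {'first aid', 'teamwork'}
```

With the strict profile, only the exam-backed competency is considered present; relaxing the profile lets repeated task completions accumulate enough evidence for “teamwork” as well, which is the behaviour the paragraph above describes.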