It’s late August, and Britain’s school leavers are contemplating with joy or despair their recent A level results. The lucky ones are making plans to go to uni. Others face the drudgery of re-sits or are dragging their deflated ambitions back to the drawing board.
Whether we approve or not, a university degree is still crucial for any young person who wants a professional career. For those from the most disadvantaged homes, university is their ticket out of poverty. This is why the stakes are so high on results day—life-changing dreams and ambitions hang in the balance.
For IntoUniversity, results day is also filled with tension. Higher education progression is a crucial metric, telling us whether or not our programme has succeeded in transforming opportunities for disadvantaged young people.
In the early days of our ‘impact journey’ the collection of HE progression data looked so easy. Determining whether someone has attained a university place requires no interpretation, no Likert scales or interviews—you have either got a place or you haven’t. All we had to do was collect the results and produce neat impact charts showing how we were out-performing national benchmarks.
A few years on, and I realise that these were the days of our innocence, when the world of impact measurement was a beautiful place with clear blue skies and the sun shone every day. I am now more inclined to agree with Socrates that most things in life are more complicated than they look, and that true wisdom belongs to those who know their own ignorance.
The problem is that counting stuff is a lot more difficult than it seems. It sounds like a simple task to add up our students and see who’s gone to university, but when is ‘a student’ properly ‘a student’?
Should we count the young people who drop in for a day or two to see our programme and decide not to stay? One argument says ‘no’, because we can hardly take either the credit or the blame for the results of students who haven’t properly completed a full programme component. Another argument says ‘yes’, because we have still expended resources on these students.
A third argument says that we need both of these analyses and several more. In reality we have many ‘types’ of student and we need to understand all of them.
What we have gained is a gallery of charts showing what is happening to our young people by programme combination, length of engagement, region and centre. This gives our staff vital information that allows them to develop our programmes and compare outcomes by programme type, duration and region.
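The segmentation behind those charts can be sketched in a few lines of code. The example below is purely illustrative—the field names (`engagement_type`, `region`, `progressed_to_he`) and the sample records are hypothetical, not IntoUniversity's actual data model—but it shows the basic idea of computing HE progression rates for each ‘type’ of student separately.

```python
# A minimal sketch of per-cohort progression analysis.
# All field names and records are hypothetical, for illustration only.
from collections import defaultdict

students = [
    {"engagement_type": "full_programme", "region": "London", "progressed_to_he": True},
    {"engagement_type": "full_programme", "region": "London", "progressed_to_he": False},
    {"engagement_type": "drop_in",        "region": "London", "progressed_to_he": True},
    {"engagement_type": "drop_in",        "region": "Leeds",  "progressed_to_he": False},
    {"engagement_type": "full_programme", "region": "Leeds",  "progressed_to_he": True},
]

def progression_rates(records, key):
    """Share of students progressing to HE, broken down by `key`."""
    totals = defaultdict(int)
    progressed = defaultdict(int)
    for record in records:
        totals[record[key]] += 1
        if record["progressed_to_he"]:
            progressed[record[key]] += 1
    return {group: progressed[group] / totals[group] for group in totals}

print(progression_rates(students, "engagement_type"))
print(progression_rates(students, "region"))
```

The same function works for any breakdown—engagement type, region, centre—so each cohort can be compared against the others and against national benchmarks, rather than collapsing everyone into a single headline figure.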
We have found that our staff love receiving this data, which allows them to take strategic programme decisions at a local level, developing their own solutions and being creative in finding new ways for the charity to achieve its objectives. This helps to generate a high-energy culture where staff are not the passive recipients of senior management decisions, but active partners in developing the charity’s programmes.