***Post written by Sarah Perkins, WSPA President***
This year, I was lucky enough to attend the NASP Convention in Baltimore. It was my first time attending a NASP Convention and I am so glad I went. Two sessions stood out to me and provided information that I could immediately integrate into my current practice.
The first session was “Cognitive Profiling in School Psychology: A Challenging History” by Ryan J. McGill, Stefan Dombrowski, and Gary L. Canivez. The focus of this session was on the statistical and mathematical strengths and weaknesses of the various scores obtained in a cognitive assessment. The overall conclusion was that no score except the overall cognitive ability score has any real meaning. That means that index scores, and certainly subtest scores, are not worth interpreting, as their variance can largely be explained by chance and by the overall score.
Now, I have not personally dug into their research to see if it holds up under scrutiny. If it does, however, it has definite implications for our reports and our evaluative focus. It also has huge implications for the future of Specific Learning Disability qualification. While Wyoming is still largely a discrepancy state, it would appear that the other models of qualification, like patterns of strengths and weaknesses, may not be valid if these presenters are correct. This presentation was based on this paper if you want more information. If you are not able to download the paper due to a paywall, the presenters said they would be more than happy to send a copy to anyone who emails them.
The second presentation that I keep thinking about was called “Writing Useful and Legally Defensible Psychoeducational Reports” by Jeanne Anne Carrier and Michael Hass. This was based on a book by the presenters so the following will be a very abbreviated version.
The gist of the presentation was that reports should be based entirely on the referral questions. In fact, the headings in our reports should be those questions (for example, “What are John’s cognitive strengths and weaknesses?”). Under each of those questions, we should include information from a review of records, observations, interviews, and testing.
The one area where I was confused is that the presenters said, when I asked about social workers gathering social history information, that it was my job to gather all of this information into my report. I am unclear whether they mean that I should do most if not all of the evaluation with very little work by my team members, that I should synthesize and interpret the work of my colleagues, or that we should all be writing on one collaborative report. Regardless, this would be a huge change in the functioning and culture of all of my SPED teams.
What do you think of both of these ideas? How could this change our practice?
Hi Sarah, thanks for highlighting some of the current issues in our field addressed at the convention. Here are some of my thoughts:
Regarding SLD identification, I thought I would share some additional resources in this area for folks in the process of learning more. I know I am still learning! The first one is a paper written by several top researchers, some of whom are authors of the paper you linked:
The second link is for additional articles around this topic:
I think the topics of useful and defensible reports and the social history are inter-related. School psychologists should conduct comprehensive evaluations that are useful and relevant to the development of programming for students. I am not sure I can conduct a comprehensive evaluation and make relevant recommendations regarding a disability if I have not considered the developmental history of a child and its impact on the child’s current functioning. If I rely on someone else to gather that information, I still need to make sure I weave that into my evaluation and findings. This leaves me with the following options: gathering the data on my own; utilizing someone else’s data in my report; writing a multidisciplinary report with my colleagues; or writing a meaningful summary of all our findings, and placing this summary into the eligibility paperwork.
Regardless of which option is chosen (which I feel comes down to team preference and skill), all data should be triangulated at the eligibility meeting so the team can make the best decision for a child. Too often, we have psych reports that never address past or present levels of functioning outside of our standardized assessments, or we produce data that, when taken at face value, is not relevant to what is happening in the classroom, yet that data is often used as the sole basis for the eligibility determination. I see both as ethical missteps in our profession. Eligibility truly has to be a multidisciplinary process.
Just some airport musings as I wait for my flight. Thanks for the post, Sarah!
I agree with both points that you made. I see the evaluation process, and more specifically the assessment tools, as a method used to validate or refute the referral question. The referral question should include the suspected disability classification and the need for SPED instruction. I put a lot of weight on the record review (historical grades, state/district assessments, intervention data) along with parent, teacher, and student input. As far as interviews go, I do not see the value in a complete social history. Instead, ask for specific academic, behavioral, and social strengths and weaknesses; ask what teaching styles or accommodations work best; and ask what, in their opinion, is causing the academic struggles and what needs to be done to change things. Assessment scores are not as meaningful if they are not also reflected in the student’s academic performance. An IQ test may be interesting for explaining strengths and weaknesses, but in the end, you still have a student who is struggling academically. You want to determine whether the student’s struggles are severe enough to require specialized instruction.