Tagged: Data Sources, Evaluation, Faculty, Organizations, Quality
This topic has 7 replies, 3 voices, and was last updated 3 years, 9 months ago by Shelly Wyatt.
January 12, 2017 at 3:13 pm #905
On your campus, there may be multiple sources of data available to you when making decisions about faculty development goals or when evaluating faculty development courses. These sources include your Office of Institutional Research (IR), faculty surveys, course gradebook and assessment data, and course analytics (available through your learning management system). What sources of data have been most helpful to you in assessing opportunities for faculty development?
February 2, 2017 at 2:52 pm #1162
We have used each of the above sources in several ways to help us develop training for our online instructors. An additional source we've used is end-of-course student surveys (or whatever your school calls them). We discovered that our online students consistently rated our instructors lowest in discussion board interaction, general communication throughout the course, and returning graded assignments on time. So we developed training to address these issues, and while we've seen significant improvement over the past couple of years, we continue to use the data we gather to shape even more effective training and development opportunities.
February 14, 2017 at 3:58 pm #1190
Hi Andrew, and thanks for posting! End-of-course surveys do provide important feedback, although in online classes I can see how faculty could easily dismiss the significance of the feedback because they lack the face-to-face context. I was wondering: do you share the overall feedback (trends) from students regarding online courses? I ask because faculty might be able to see patterns in the feedback (e.g., instructor presence in online discussions) and see how their own students' feedback compares to students' feedback in other classes.
February 15, 2017 at 11:04 pm #1192
Hi Shelly,
This is an excellent question.
We use a third-party vendor, IOTA Solutions, that helps us manage our surveys. Faculty can log in after their courses are completed and see how students responded to both quantitative and qualitative questions. They can also see how they compare with other courses and whether their scores fall more than one standard deviation above or below the mean for each question we ask. We also review this data at a larger scale at our fall and spring kickoffs, where we address the overall trends and look at ways to respond to them.
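If you don't have a vendor tool, you can approximate that above-or-below comparison with a short script. Here's a minimal sketch in Python, assuming survey results have been exported to a CSV with course_id, question_id, and score columns; those column names are just a made-up example, not what IOTA actually provides, so adjust to your own export:

```python
# Minimal sketch: flag per-course survey scores that fall more than one
# standard deviation from the cohort mean on each question.
# The CSV layout (course_id, question_id, score) is a hypothetical example,
# not an actual IOTA Solutions export format.
import csv
from collections import defaultdict
from statistics import mean, stdev

def load_scores(path):
    """Group per-course scores by survey question."""
    by_question = defaultdict(list)  # question_id -> [(course_id, score), ...]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_question[row["question_id"]].append(
                (row["course_id"], float(row["score"]))
            )
    return by_question

def flag_outliers(by_question):
    """Yield (question, course, score, band) for courses more than one
    standard deviation above or below the cohort mean."""
    for question, entries in by_question.items():
        scores = [score for _, score in entries]
        if len(scores) < 2:
            continue  # stdev() needs at least two data points
        mu, sigma = mean(scores), stdev(scores)
        for course, score in entries:
            if score > mu + sigma:
                yield question, course, score, "above"
            elif score < mu - sigma:
                yield question, course, score, "below"

if __name__ == "__main__":
    for q, course, score, band in flag_outliers(load_scores("survey_export.csv")):
        print(f"{q}: {course} scored {score:.2f} ({band} one std dev from the mean)")
```

Running that against an export prints each course that sits more than one standard deviation from the cohort mean on a given question, which is essentially the comparison our faculty see in their reports.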
When we first moved to this vendor, I made individual videos for each of the 100 or so courses we offered in each of three terms, going over the results with each instructor to highlight what was significant and point out the trends I was seeing. It was a great way to encourage the solid instructors and to give specific examples of how to improve to those who were struggling in their classes. Yes, it took a TON of time, but it's paid dividends in student learning, in instructors' improved understanding of pedagogy, and in the relationships I was able to build with our faculty through the videos and subsequent conversations I had with many of them.
I’m attaching a file that shows some samples of what our profs can see in their report.
February 27, 2017 at 1:55 pm #1201
Hi Andrew, and thanks for posting! Wow! You are a superhero for making all of those videos! Thanks so much for providing the sample. Is there any way other instructional designers or faculty development specialists might replicate some of these results without making individual videos?
March 9, 2017 at 12:16 pm #1234
Here are a few ideas that we've worked on:
We've made some general videos about recurring issues (interaction on the discussion boards, grading issues, etc.) and we point instructors to those videos when they're struggling in a particular area.
We’re working now with Degreed.com to develop specific training resources for faculty to review when they have ongoing struggles. Through Degreed, we’ll be better able to track when faculty actually complete the review materials.
We’ve also addressed several of these key areas in our monthly faculty development webinar over the past 18 months.
I’d LOVE to hear how others are doing this too!
March 30, 2017 at 1:29 pm #1462
Thanks, Andrew!
I am definitely going to check out Degreed.com to see what they have in terms of resources for IDs.
Can you provide some details regarding your monthly faculty development webinars? How do you determine what topics to cover?
March 23, 2017 at 2:28 pm #1409
After the presentation from the Evaluation experts at UCF, I am curious whether anyone has feedback or thoughts about a statement made during the discussion. It was mentioned that it's best to solicit feedback soon after the presentation or lesson. (For example, we gave feedback immediately after each session during our TOPkit workshop.) With that in mind, are we currently soliciting feedback from students at the most opportune time in our online courses? We typically wait until the end of the semester to gather feedback and expect students to reflect on the entire course experience at that point. Would it be better, or do you think it would change the data, if we gathered this feedback at several points during the course? What are the drawbacks or barriers to doing something like this?