Wednesday, October 29, 2014

Video Streaming Options: A Comparison of Kaltura, V-Brick, and YouTube

Streaming videos via Kaltura, YouTube, V-Brick, etc. rather than putting the video file directly into D2L reduces technical problems for students, because these services accept many different video formats and standardize the playback. That avoids problems with downloading, finding the right program for playback, and browser permissions/blocking for all the different file types. Streaming also helps videos play better on mobile devices.

WMVs (including Desi files - a UWEC thing) are particularly problematic because the WMV player for Mac is no longer free. However, if you stream WMV files, they will work on a Mac.

Below are descriptions of the three main options we recommend for UWEC faculty and staff instructional uses.

Kaltura: new for UWEC in fall 2014 
  • Integrated with D2L – no external sites or logins.
  • Videos "live" on Kaltura's server, rather than in your D2L course. 
  • 2 GB limit per video - that's a lot! Use a wired connection if you have a big file; uploading can take a while. 
  • No quota, meaning no limit on how much video you can have.
  • Offers webcam recording and screencasting, but it's not reliable yet. 
  • Students can upload videos to Kaltura via Discussions, a nice alternative to YouTube or having them upload a file to the dropbox.  
  • Videos are connected with the account of the instructor who uploaded the video, which may be a challenge if team teaching or sharing videos with other instructors - let's chat if you have this concern. 
  • It's currently in pilot mode - contact me or another LTS/CETL employee for instructions. 
  • A downfall is that videos embedded into News, Discussions, or Quizzes do not copy over to your next course via copy components. Videos in Content do copy over. Hopefully this will be fixed soon. 
V-Brick (the university server)
  • The only option for copyright-protected materials such as movies that the video department streams for people, since it can be restricted to require a login. 
  • Videos can be set to public if there are no copyright concerns. 
  • A downfall is that instructors currently cannot upload their own videos, so they need to plan a few days ahead. 
  • The site can be a little slow to load. 
  • If you're looking for a movie, log in and search to see what's out there (only available to UWEC staff/faculty/students).
  • Still a good option, especially if you already use it. 
YouTube
  • It's easy to share videos outside of D2L or take with you if you get another job (unlike the other options). 
  • Many programs like Camtasia export directly to YouTube, saving a step. 
  • Webcam recording works reliably, unlike Kaltura's - add "my_webcam" (that's "my underscore (_) webcam") after the YouTube URL to get there. 
  • Should not be used for videos of students/clients or copyright-protected material. 

Thursday, October 23, 2014

Recording Audio into PPT on a Mac...

PowerPoint 2011 for Mac has long been a thorn in my side.  If you're looking to add audio to a slide deck and publish into a movie, it doesn't even come close to doing what PowerPoint 2013 for Windows does. Actually, it doesn't even work. You can record audio into it and set timings, but when you save as a movie, the audio just shows up as an image of a speaker and doesn't play. A brief Google search found that this is a pervasive problem - it's not just me.  There are rumors of Office 2014 for Mac, but I have found no confirmed release date. There's no guarantee this will be fixed, anyway.

So, I looked into Keynote instead. I was really excited, thinking it was the answer to all of my problems, but it turns out that Keynote records one big audio file for the whole presentation, not separate files for each slide. Darn it. Never mind. What people really like about recording audio into PowerPoint 2013 is being able to easily change a slide here and there without redoing the whole thing. A screencast accomplishes about the same as Keynote. (I was recently reminded that QuickTime on a Mac is also a basic screencasting tool that requires no download - cool!)

For years, my answer to Mac users who want to record audio into PowerPoint has been "do you have access to a Windows computer?" It seems that recording audio on the Mac and then exporting on Windows in PPT 2013 retains the audio and timings. I haven't done it enough times to feel very confident in this fix, but it did just work for me. If someone wants to do this, I would test just one slide on the two computers the person intends to use, just in case.

Another option is to use a program like VMWare or Parallels to run Windows on a Mac. Back at my old job, I used Parallels and it wasn't great, but that was almost four years ago.  I recently got VMWare Fusion and I love it!  It is so convenient to have Windows easily accessible on my Mac. I just recorded audio into PowerPoint 2013 on Windows via VMWare and it did work. Windows seemed a little confused about my USB mic and didn't name it properly in the control panel, but it did use the correct mic and I got it to sound great.

I suppose the easiest thing to do as a Mac user who wants the ability to re-record a slide here and there is to add the audio in PPT 2011 and just tell the students how to play it right out of PPT.  That's not very elegant, though, and not at all mobile friendly.  I would, minimally, try exporting it on a Windows computer and then putting it on YouTube or Kaltura, and then just give the students the PPT file if something goes awry in that process.

How to Set a Transparent Background on an Image in PowerPoint 2013

PowerPoint is a surprisingly helpful image editing tool. One thing you can do very easily is remove the background of an image. I actually like a white slide background, and one reason I advocate for it is that many images have a white background, so they just blend into the slide like this:

That takes no work. However, what if you want a colored background? You get something like this:

Yuck. Thankfully, PowerPoint 2013 makes it easy to set that background as transparent. First, click on the image. When it is selected, Picture Tools will be available on the toolbar at the top. Then choose Color and Set Transparent Color. 

Sorry for the bad annotation - I wish the snipping tool had built-in arrows and shapes. 
Your cursor will then turn into a color picker. Click on the color you want to be transparent - in my case, the white background of the map - and voila!

Pretty cool, huh? Much easier than Photoshop. 

Tuesday, October 21, 2014

Introducing Students to Turnitin's Originality Check as a Learning Tool

Turnitin is kind of an odd learning technology for me because I have not used it as an instructor but I have been subjected to it as a student. The university I attend requires instructors to submit at least one paper by each student in each class to Turnitin. The papers I've written have ranged from 0-40% unoriginal. The 40% result alarmed me but it was actually ok because my instructor submitted my whole paper, including the standard cover page that all students use, rather than just copying out the text I wrote.  Add in my references being found unoriginal, a few quotes, and a couple of small coincidental matches, and it looks kind of bad although no plagiarism was occurring. When I asked my instructor if this was ok, he basically said "sure, don't worry about it."  Huh?

It doesn't have to be like this! The writing instructor who does the Turnitin best practices webinars uses Turnitin as a learning tool. She starts out the semester letting her students submit drafts and revise based on the originality and grammar report. By the end of the semester, she expects the students to have learned from their experience and weans them off of submitting drafts to Turnitin and revising. 

I have also heard of instructors who will tell students to write a bad paper and plagiarize away or paraphrase closely to see what happens when it's submitted to Turnitin. Let's learn how this thing works, because understanding it can sort of trick people into writing and citing better. (If you use Turnitin this way, you can set it to not submit these papers to the Turnitin repository.) 

I recommend that instructors try it out themselves as well. I thought it was fascinating to submit some of my own papers when I got access to Turnitin through work. We can create fake student accounts so instructors can get the full experience of submitting and getting the report. I recommend not saving these to the Turnitin repository.

If you want to give the students the benefit of the doubt, you can explain Turnitin as a self-check to ensure there has been no unintentional plagiarism. The writing experts at UWEC say that plagiarism really isn't that prevalent and most plagiarism is unintentional due to poor citing or paraphrasing too closely. Those students can be helped! 

Youmans (2011) found the threat of Turnitin didn't eliminate plagiarism in his classes - 3 students still clearly plagiarized despite being told Turnitin would be used. He hypothesized that these students were desperate at the last minute and hoped their plagiarism wouldn't be found. My hypothesis is that if the students understood Turnitin better, they might have been less confident they'd get away with it. Also, the students who had unintentionally plagiarized might have learned from Turnitin if they were allowed to submit a draft. 

Here's how to set up a dropbox so students can re-submit and see their results:
  • Go into "Edit Other Options in a New Window" at the bottom of the D2L dropbox properties page.
  • Click on "Optional Settings."
  • Where it says "Generate Originality Reports for student submissions" choose "immediately (can overwrite originality reports until due date)".
  • Ensure that students are able to see the reports.
My opinion is that, minimally, instructors should allow students to see their originality reports, even if they don't allow students to re-submit.  This gives students the opportunity to learn from the service and keeps them from feeling as if their paper is being used for something without their involvement. 

So, I'm making some pretty positive assumptions about students' intentions and the capabilities of Turnitin in this post. I want to remind you that Turnitin can't always find every instance of matching text, and it does not replace an instructor's intuition or old-fashioned ways of identifying plagiarism. It's still important to use unique assignment prompts, collect multiple samples of writing for comparison, and break up a big paper into smaller assignments when possible. My next post will explore the loopholes, ways students can cheat Turnitin, and downfalls. Hopefully I don't need to come back and revise this post after fully researching that! 

Youmans, R. (2011). Does the adoption of plagiarism detection software in higher education reduce plagiarism? Studies in Higher Education, 36(7), 749-761.

Wednesday, October 1, 2014

Accessibility/Universal Design at UWEC

Since my previous career was working with people who have disabilities, accessibility is never far from my mind. While I've been at UWEC, online courses and instructional videos have increased, and it is important to consider accessibility to ensure everyone can access the content.  This blog post contains our interpretation of the law at UWEC and what my team has been doing to move toward proactive accessibility. I now have two students who work on video transcription, scanning, and other instructional design tasks for a total of 20 hours per week.  It kind of feels like we are ants moving tiny pieces of sand, but it's something!

The Law

First and foremost, Section 504 of the Rehabilitation Act and Title II of the ADA mandate that accommodations must be provided to students with documented disabilities who request them, to create equal access. It is not legally mandated that everything in a course be proactively accessible, but some programs or universities may require it as a business practice.

Ideally, materials are made accessible proactively, since setting up accommodations can be time consuming, and accommodations for students with disabilities often benefit students who do not have disabilities too.  For instance, providing a transcript allows students who are deaf to read what is in a video, and students who just prefer to read rather than watch (like me) can do so too.  Students who speak English as a second language can follow along better by reading and hearing at the same time. That's called universal design.

In addition, students are not required to disclose a disability in college, so they may not disclose until there is a problem; no problems means they don't have to be singled out. Wouldn't that be great?


Our main focus is on providing a text alternative for videos with audio information.  This is our priority because it has the potential to impact students who do not have disabilities as well, as I mentioned.  Instructors have reported back that their students have appreciated transcripts.

I first find courses/instructors/videos that seem to be good candidates. There are actually not many; I have to actively seek them out. Here are the criteria for good videos to caption/transcribe, but if an instructor feels passionately about it, I would certainly have the students work on their class.
  • Content is stable - videos will be used again.
  • Courses are at least average in size (30+ students - not regularly 10 or so).  Ideally, they are large or the course is offered frequently so that the video is viewed by many students.
  • Online courses are prioritized since literature has indicated that students with disabilities are more likely to take online courses.
  • The instructor is willing to work with me to provide this accommodation.  Usually I can just pop the transcripts into their D2L course myself, but sometimes if they need to be captioned I will have to meet with the instructor and get the transcript into YouTube. 
Here's how we do it. My students get the videos in a variety of ways: YouTube playlists, URLs, or I give them access to the content page in D2L if approved.  They usually watch the videos via VLC player because it allows them to slow down the playback speed so they can type continuously rather than listen, pause, and rewind.  My first student typed about 95 words per minute, so he could keep up pretty well with the video set to 60% speed.  If you copy a video's URL, you can paste it from the clipboard in VLC to open it.  
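To sanity-check that 60% figure: conversational speech runs somewhere around 150 words per minute (that average is an assumption on my part, not something we measured), so slowing playback to 60% drops the rate just under a 95 wpm typist's speed. A quick sketch:

```python
# Rough check that a 95 wpm typist can keep up with slowed-down speech.
# The 150 wpm average speaking rate is an assumed figure, not a measurement.
AVG_SPEECH_WPM = 150
TYPING_WPM = 95

def effective_speech_rate(playback_rate: float, speech_wpm: int = AVG_SPEECH_WPM) -> float:
    """Words per minute the transcriber hears at a given playback rate."""
    return speech_wpm * playback_rate

print(effective_speech_rate(0.6))  # 90.0 wpm - just under 95, so typing keeps up
print(effective_speech_rate(1.0))  # 150.0 wpm - too fast to type in real time
```

At full speed the student would fall behind and have to pause and rewind constantly; at 60% they can type straight through.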

Transcript or Captions? 
When the audio/text would lose meaning without seeing what is on the screen, we caption it. This often happens in math when they write out problems while speaking.  Captions usually need to be word-for-word, with the ums, ahs, and misspeaking.  This is because the timing of the captions will be off if too much is missing.  

We try to use YouTube for captions because it will automatically sync the timing if you upload a text file, which is a big time saver.  (If you want to be entertained, turn on the automatic captions that YouTube creates - wow, bad. But it usually works great to upload your own text and let it do the timing.)  The alternative is to use a program that has you manually click to set when each caption appears. Pretty time-consuming. You can also let YouTube take a stab at the captions, download their file, and then edit it and re-upload.  
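For reference, here's what a timed caption file looks like under the hood. This is a minimal sketch (the timestamps and lecture text are made up) that builds SRT-style cues, roughly the kind of file you'd download from YouTube, edit, and re-upload:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(cues):
    """cues: list of (start_sec, end_sec, text) tuples -> SRT file text."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

# Hypothetical cues from a math lecture - word-for-word, ums and all,
# because the timing drifts if too much of the audio is left out.
cues = [
    (0.0, 3.5, "Okay, um, let's factor this polynomial."),
    (3.5, 7.2, "First we, uh, pull out the greatest common factor."),
]
print(to_srt(cues))
```

Every cue carries its own start and end time, which is why captions have to stay close to word-for-word: drop a chunk of audio from the text and all the later timings slide out of sync.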

In the majority of cases, it is possible to include screenshots with the text so the file stands alone and students can read it without watching the video at all.  We clean up transcripts so that they read well.  It is easier to just provide a text transcript because it's done when the student is done typing and reviewing.  I sometimes suggest breaking it up into more paragraphs or adding headings, and, if logical, they will add screenshots.  For instance, voice-over PowerPoints are always text files with screenshots.  We'll make that into a PDF and then reduce the file size if there are a lot of images.  

How long does it take and what does it cost? 
The first student I employed estimated that it took 10 minutes to transcribe each minute of video if he counted file moving, reviewing, etc.  A colleague at another university said they found that paying a captioning company costs about the same as paying a student, which may be the case, but students are the best option for me because 1) they may have work study, which then costs my department nothing, and 2) I can justify hiring students because that is a normal thing we do, whereas paying a captioning company is difficult because it is a whole different payment process.
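The 10-to-1 estimate makes planning straightforward; a quick sketch (the hourly wage below is a placeholder, not our actual rate):

```python
# Our student's estimate: 10 minutes of work per minute of video,
# including file handling and review.
MINUTES_PER_VIDEO_MINUTE = 10

def transcription_hours(video_minutes: float) -> float:
    """Estimated student-worker hours to transcribe a video of the given length."""
    return video_minutes * MINUTES_PER_VIDEO_MINUTE / 60

def transcription_cost(video_minutes: float, hourly_wage: float) -> float:
    """Estimated cost; hourly_wage is a placeholder - plug in your own rate."""
    return transcription_hours(video_minutes) * hourly_wage

print(transcription_hours(30))       # 5.0 - a 30-minute lecture is ~5 hours of work
print(transcription_cost(30, 9.00))  # 45.0 - at a hypothetical $9/hr
```

So a course with ten 30-minute lectures is roughly 50 student hours, which is why we prioritize stable videos that will be reused.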

Captions vs Subtitles 
Just as an FYI, captions and subtitles are technically different: captions include things like noises ("door slams," etc.) while subtitles are meant for different languages and don't indicate noises. If accessibility is your goal, captions should be provided. I really haven't run into this being a problem, because instructional videos don't often include noises like movies do.

What Else? 

That was a lot just about transcription and captioning.  I'm going to stop this post here and then create another on vision disabilities primarily that I will link to here when it's ready. Please comment if you have other ideas or questions!

Friday, September 5, 2014

Embedding Twitter into a D2L Widget

A professor asked me about embedding a Twitter feed into a D2L widget on the course home page, and after googling for help we both ran into the same problem - the widget just showed a link to open Twitter rather than actually embedding the feed.

The problem was that the code Twitter generates when you first create a new widget did not include a "data-widget-id" - but this ID is in the URL, so you can copy and paste it out of there.

The code should look like what I have below, but with the Twitter handle or search you want and the appropriate data-widget-id swapped in (those are the parts I had highlighted in red on the original post). 

<a class="twitter-timeline" href="https://twitter.com/edutopia" data-widget-id="507604658731220994">Tweets by @edutopia</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>

Now here's the really weird thing.  When I went into Twitter today to write this post, the data widget ID was there! What the heck!

I made a video showing this:

And if you want info on changing the course home page and creating the widget, we have documentation on this webpage.

NOTE: You can actually tweet right from the widget in D2L.  There is a box at the bottom of the feed that says "Tweet to @lirpapierson"; tweeting there would send me a tweet. If you instead have a hashtag feeding into it, it would say "Tweet #aphasia." The professor I assisted clicked there and it brought up his Twitter account, making him worried that students would also have access to tweet from his account. However, when I clicked there, it brought up my Twitter account. So if students are logged in, it will be very easy to tweet, and if they are not, it should prompt them to log in or create an account. It just seems kind of odd at first.

Thursday, July 3, 2014

Learning Analytics Summer Institute (LASI)

This week I attended the Learning Analytics Summer Institute (LASI), which is a hybrid event that involved three half-day streaming sessions from Harvard and two half days of face-to-face sessions in Madison.  There are local events all over the globe that center around the Harvard streams.  If you are completely unfamiliar with Learning Analytics, this information from Educause is a good foundation.  It's been very exciting to learn about Learning Analytics (LA) on the big scale from Harvard and the smaller scale from the UWs - mostly Madison.  Below are a few major general points. I'll also write a post on ethics and one on two specific tools used in the UW system, MAP-Works and the D2L Student Success System (S3).


There are a lot of terms used in this area and I realized that I wasn't 100% sure what they all meant, so here's some info for those of you who may be in the same boat:

Analytics: "the use of data, statistical analysis, and explanatory and predictive models to gain insights and act on complex issues." (Educause, 2012, p. 1).

Learning Analytics (LA): "a genre of analytics that entails the collection and analysis of data about learners" (Educause, 2012, p. 1). LA can consist solely of data generated by learners as they work on a course, or it can be supplemented by information about the learner like demographics, previous course work, high school information, standardized test scores, self-report data, etc.

Predictive Learning Analytics: obviously, LA that helps predict things like student success/risk, course recommendations, paths through courses, etc.

Educational Data Mining (EDM): "EDM develops methods and applies techniques from statistics, machine learning, and data mining to analyze data collected during teaching and learning. EDM tests learning theories and informs educational practice" (US Dept of Ed, 2012, p. 9).

EDM vs LA: "Learning analytics draws on a broader array of academic disciplines than educational data mining, incorporating concepts and techniques from information science and sociology, in addition to computer science, statistics, psychology, and the learning sciences. Unlike educational data mining, learning analytics generally does not emphasize reducing learning into components but instead seeks to understand entire systems and to support human decision making." 

Basically, my interpretation is that EDM is more granular and LA is bigger picture. 

Big Data: Well, obviously big data refers to lots and lots of data - the type of data that Amazon has, for instance. What we are working with in D2L is "little data."

Open Learning Analytics: Similar to open-source software, open LA provides access to source code, algorithms, and whatever other back-end info is there (I have no idea).  The opposite would be proprietary or commercial systems, like D2L's or MAP-Works.  The presenters on the D2L tool said that they don't know exactly how it comes to the conclusions it does - they set up a model, it's fed data, and it spits out judgments. Not open.

What Learning Analytics is not: One or a few "surfacey" measures, like test statistics or number of logins to the LMS.  Although that information can be useful, it's not big picture enough to really be considered LA because it's not predicting anything in comparison to anything else nor is it combining data to look at a bigger picture.

Learning analytics is a legitimate, emerging field on its own

Maybe that's an odd thing to say, but I didn't realize that LA was so big. I guess I thought it was a part of educational technology, but it's more aligned with computer science and data analytics.  Educational technologists would probably be the intermediaries who get a system set up, support it, and get the information to faculty who provide the interventions based on the conclusions of the data.  Either a proprietary/commercial LA system or an LA professional (ideally, both) would be needed in addition to an educational technologist.

LASI was a big conference for the field of LA.  There was a focus from the Harvard sessions on the field in general, other professional development opportunities, and journals/publications.  A big part of it seemed to be building a community of LA professionals. I did not feel like the intended audience for the Harvard sessions (I got kind of an awareness-level of absorption - "oh, that's a thing?") but I learned a lot from the Madison sessions and it was interesting to get a peek at the more hardcore aspects of the field.

Learning Analytics Professionals

There was an entire session on LA professionals, many of whom work in private industry (rather than academia) making excellent salaries.  The owner of Instructure (Canvas) described the main skills he looks for in a learning analyst:
  • Project management
  • Agile/rapid prototyping (create something quickly to start playing with it - also a desired skill of an instructional designer)
  • Communication 
  • Education theory (varying degrees of specialization depending on the specific application)
  • Statistics
  • Data retrieval (SQL, CSV, JSON)
  • Rudimentary/functional scripting
  • Visualization ("Make it pretty")
  • Data storage & management
  • Knowledge management ("How did we do that thing we did?") 
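To make the "data retrieval" bullet concrete, here's a toy sketch of the kind of groundwork a learning analyst does before any modeling: pulling per-student activity counts out of an LMS export. The CSV columns and numbers are entirely hypothetical, and remember from above that a raw login count by itself is a "surfacey" measure, not LA - it's just one input:

```python
import csv
import io
from collections import Counter

# Hypothetical LMS activity export - the column names are made up for illustration.
lms_export = """student_id,event,timestamp
s001,login,2014-06-30T08:01:00
s002,login,2014-06-30T09:15:00
s001,login,2014-07-01T08:05:00
s003,login,2014-07-02T10:22:00
s001,login,2014-07-03T08:00:00
"""

def logins_per_student(csv_text: str) -> Counter:
    """Count login events per student from a CSV activity export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["student_id"] for row in reader if row["event"] == "login")

counts = logins_per_student(lms_export)
print(counts.most_common())
```

A real analyst would then combine this with demographics, grades, and other sources before feeding a predictive model - that combining step is where it starts to become learning analytics.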
There was another Harvard session in which 30 doctoral students in LA introduced themselves and some shared their research, very briefly.  I think it's interesting that there are at least 30 doctoral students doing research in this field.

Emerging Field

So the emerging part is important. There's not a lot of info or structure around data governance, for instance.  There is an LA pilot happening in the UW System with D2L that has been pretty rocky.  A master's program specifically in LA is in the works (Penn State, I think?).  A journal just started this year.

Kimberly Arnold, UW-System LA guru, shared the Gartner Hype Cycle which is just fascinating. Here's the one from 2013. Click to make it bigger if you can't read this.

Big data is at the peak of inflated expectations, while predictive analytics is predicted to reach its plateau of productivity in less than two years.  Wikipedia tells me that the plateau of productivity means that "mainstream adoption starts to take off." Starts to take off. I'd agree that it's starting to take off.

Then she shared the diffusion of innovation figure. You've probably seen this before:

She said that people "in the know" about LA believe that LA is in the innovators-to-early-majority area, which might be a stretch.  It's new - we're just figuring this out.  So the thing to think about here is what kind of school you are in - an innovator, early adopter, early majority, late majority, or laggard?  Is the culture welcoming of bleeding-edge technologies like this?  Probably more importantly, are there people who will do something with the data that's collected?  Do you want to spend time figuring things out, or latch on once the bugs have been worked out?  Coming up, I'll share what I learned about the D2L Student Success pilot.  To be continued!


Educause. (2012). Learning Analytics: A Report on the ELI Focus Session.

US Department of Education. (2012). Enhancing Teaching and Learning Through Educational Data Mining and Learning Analytics.