Hello,
This will be my last blogging post here. My blog has now moved to:
http://blog.edcetratraining.com/
This month I'm focusing on providing neat little tips and tricks for working with structured content and delivering high-end eLearning courses.
Wednesday, September 8, 2010
Friday, July 23, 2010
Why eLearning Should Care About Adobe versus Apple
Two technology giants have been exchanging blows lately, each attempting to knock the other out in the public eye. Apple and Adobe are at odds over the adoption of the Flash platform on iPhones, iPads and iPods. The battle in fact reinforces the public’s strong interest in seeing the two giants work together, but alas there seems to be no reconciliation in the near future.
In a letter posted on the Apple website, Steve Jobs, CEO of Apple, writes:
Flash was created during the PC era – for PCs and mice. Flash is a successful business for Adobe, and we can understand why they want to push it beyond PCs. But the mobile era is about low power devices, touch interfaces and open web standards – all areas where Flash falls short.
The avalanche of media outlets offering their content for Apple’s mobile devices demonstrates that Flash is no longer necessary to watch video or consume any kind of web content. And the 200,000 apps on Apple’s App Store proves that Flash isn’t necessary for tens of thousands of developers to create graphically rich applications, including games. (http://www.apple.com/hotnews/thoughts-on-flash/)
One party with a real interest in reconciliation between the two giants is the eLearning industry, which relies heavily on Flash to deliver what is commonly known as ‘interactive content’, videos and even plain text. Moreover, the recent fascination with mLearning has the industry wanting a ubiquitous mobile platform that includes Flash capabilities.
Almost all industry-accepted eLearning tools have Flash outputs, and most have an mLearning component or are looking into one. The vast majority of organizations that develop their eLearning internally (which I would think form the largest group of eLearning producers) have armed themselves with a ‘do-it-yourself’ tool and handed it off to their non-technical resources, who inevitably become the ‘developers’. As online learning goes mobile, questions about how to build and how to deploy mobile learning are surfacing en masse. Almost all conversations around these questions turn to the ugly truth that iPhones and iPads don’t run Flash, so the ‘interactive bag-o-tricks’ has to be left at the door for large-scale mLearning deployments. Instead, folks are asked to think creatively about how best to use the technology to make it larger than what it is (and there have been plenty of great ideas).
In the letter Jobs posted on the Apple site, one section really caught my eye and sparked the motivation to write. He says:
Adobe’s Flash products are 100% proprietary. They are only available from Adobe, and Adobe has sole authority as to their future enhancement, pricing, etc. While Adobe’s Flash products are widely available, this does not mean they are open, since they are controlled entirely by Adobe and available only from Adobe. By almost any definition, Flash is a closed system. (http://www.apple.com/hotnews/thoughts-on-flash/)
Ever hear the expression ‘Here today, gone tomorrow’? With the rising white noise of HTML5 on the horizon, are we as confident as we used to be that Flash is going nowhere? Speaking of Apple, how many MacBooks did we see in the airport five years ago, perched on the business traveler’s lap? How many do we see today? Personally, I see almost as many MacBooks as I do PCs. All that to say that technology changes, and it can change quickly or very slowly. The scary part of what Jobs says is the collective sound of the bottom dropping out of corporate America’s internal training programs when they become locked out of use as new platforms take over.
Although this specific battle between Apple and Adobe isn’t at the heart of the issue (the heart is the proprietary nature of the tools we use in eLearning), the battle should be a rallying call to everyone who has invested in technology-enabled training. When I first started in this field, the big concern for corporate America was the collective retirement of the baby boomers and the skills and knowledge they would take with them as they went. How would corporate America capture that knowledge and then use it to train the up-and-coming generation of workers? Funny enough, the answer seems to have been ‘lock the knowledge into a proprietary format and distribute it using proprietary tools’. Funny…because what happens when the tools disappear?
Sure, it won’t happen tomorrow. Or will it? A couple of years ago, a fairly large educational institution received its new invoice for licensing a proprietary LMS for the next few years. Startled at the price increase, the institution released a request for proposal seeking a new LMS at a more affordable price. As part of the requirements listed in the RFP, the new vendor would be responsible for converting legacy content from the old LMS into the new platform. Guess what? Although a different LMS was cheaper to license, the cost to convert the content to the new platform far exceeded the renewed licensing costs of the old LMS. Faced with a fiscally painful, no-win decision, the least painful route was sticking with the old system. Now, I’m a believer in standards and support the SCORM initiative; however, every system has its nuances in implementing the standards, and when you have thousands of courses and modules, changing even one line of code to accommodate a system’s nuance becomes a rather large undertaking.
The solution, for me, is not necessarily to implement an open source LMS, although in the right situation that may solve some problems for some people. Instead, the solution is wrapping our content in open source formats. The paradigm shifts, workflow shifts and business-model shifts involved have so far kept corporations from taking a good look at this (since it requires actual expertise rather than putting a new hat on an unqualified resource), but the feud between Apple and Adobe should at the very least make us reconsider.
The notion of wrapping content in open source formats may seem foreign, and truth be told, I’m using the phrase in a very specific way. By ‘open source formats’ I mean a vision of multiple software platforms (LMSs, LCMSs, authoring tools, etc.) being able to process the same content regardless of how each platform was built. Processing content refers to a machine’s ability to apply a defined set of algorithms to ingest the content and then output it in a specified format. The only precondition for these platforms to process the content is that they contain algorithms that understand how the content was wrapped. It sounds complicated, but in truth all systems work this way: for any system to work, there must be a common language, a common denominator, that all components of the system understand. The common denominator allows the components to operate under their own sets of rules, yet communicate with one another.
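To make the common-denominator idea concrete, here is a minimal sketch in Python. All the element names are hypothetical, invented purely for illustration: two different ‘platforms’ consume the same wrapped content because both understand the shared format, even though each applies its own rules to the output.

```python
import xml.etree.ElementTree as ET

# A hypothetical piece of content wrapped in a shared, open XML format.
# The tag names here are invented for illustration only.
CONTENT = """
<topic title="Safety Basics">
  <concept>Always wear protective equipment.</concept>
  <example>Hard hats are required on the factory floor.</example>
</topic>
"""

def to_elearning(xml_text):
    """One 'platform': renders the content as an HTML fragment."""
    topic = ET.fromstring(xml_text)
    parts = [f"<h1>{topic.get('title')}</h1>"]
    for child in topic:
        parts.append(f"<p class='{child.tag}'>{child.text}</p>")
    return "\n".join(parts)

def to_print(xml_text):
    """A second 'platform': renders the same content as plain text."""
    topic = ET.fromstring(xml_text)
    lines = [topic.get("title").upper()]
    for child in topic:
        lines.append(f"{child.tag}: {child.text}")
    return "\n".join(lines)

print(to_elearning(CONTENT))
print(to_print(CONTENT))
```

Neither ‘platform’ knows anything about the other; the only thing they share is an understanding of how the content is wrapped.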
Bryan Chapman of Brandon Hall, in his paper At the Intersection of Learning and Enterprise Content Management, talks about the disconnection of training departments from ECM strategies, which results in training departments having to recreate content that already exists elsewhere in the organization. That disconnection will always make training irrelevant when push comes to shove, because training isn’t part of the system.
To be part of the system, training departments need to be plugged in. Being plugged in means that content that exists elsewhere in the organization can flow through the training department (it can be processed and turned into training) without having to be recreated. Building content with tool suites that only speak to themselves, locking training content into Flash, and putting it into systems that can only be used for training clearly keeps the training department disconnected from the organization’s system.
Turning again to the feud between Adobe and Apple: there ought to be strong global interest within the eLearning community in seeing it resolved, lest the marginalization of Flash become a reality. Moreover, there ought to be strong global interest from the eLearning community in protecting itself and its organizations from future feuds through the adoption of ‘open source formats’ for content. Not only would organizations be protected from feuding giants, they would also have a basis on which to exchange content and have it flow through the system.
Labels: Adobe, Apple, edCetra Training, eLearning, eLearning 3.0, semantic web
Wednesday, June 30, 2010
What is sLML and why use it?
sLML stands for Structured Learning Markup Language. It is an open source XML standard that helps learning designers and developers apply meaningful semantic markup to learning or knowledge-based content. sLML was designed to support web 3.0 and the notion of performance-based learning as opposed to event-based learning. sLML provides a rich lexicon of instructionally relevant tags for content data. The sLML schema supports instructional design through the application of terms and concepts from the science of instructional design.
There are two important reasons to use sLML:
1) sLML provides a natural language for tagging content that can act as a common technology for processing learning content. What does this mean? In the same way that the Dewey Decimal System acts as a common technology for library science, sLML can provide a common framework for different learning applications to process content. ‘Processing content’ in this case refers to a computer’s ability to understand the nature of the learning content and then distribute it to the appropriate platform, to the appropriate audience, in the appropriate language, at the appropriate time. The distribution can be print-based, computer-based, through mobile devices or any other channel. Again, if you think about how the Dewey Decimal System is used by computers or card catalogs to find and retrieve books, the technology around the Dewey Decimal System is secondary. What makes the technology useful is the Dewey Decimal System itself. In the case of sLML, organizations are free to build their own tools relevant to their organization, much like the different applications built around the Dewey Decimal System.
2) The use of the sLML model is consistent with the evolution of web technology. The web is slowly becoming the ubiquitous operating system for everybody. More and more people are storing, transferring and using documents and applications directly on the web. As it relates to sLML, using the web as a CPU means that computers (not humans) will be able to do the actual developing, compiling and distribution of content into eLearning, print, mobile, etc. at run time. This is much different from what happens today. Today the process of developing and compiling learning content into packages for distribution is manual. People compile content into pages, develop the code for those pages, apply the proper standards so the content can ‘play’ in the appropriate application, package the content and so on. Having a computer do that work means that content can be consumed in sync with its inclusion into the web. In other words, plug content into the web using a standardized semantic markup language (sLML) and then have your web-based processing agent make sense of it, compile it, distribute it, track it and anything else, all at run time.
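As a sketch of what instructionally tagged content might look like in practice, here is a small Python example. The tag and attribute names (learningObject, objective, level, audience) are my own invention for illustration, not taken from any published sLML schema. The point is that a machine can read the instructional role of each piece of content directly from the markup:

```python
import xml.etree.ElementTree as ET

# Hypothetical instructionally tagged content. The tag and attribute
# names below are invented for illustration; they are not the real
# sLML schema.
SLML = """
<learningObject subject="first-aid" language="en">
  <objective level="memorize">List the steps of CPR.</objective>
  <content audience="new-hire">CPR consists of chest compressions
  and rescue breaths, delivered in cycles.</content>
</learningObject>
"""

root = ET.fromstring(SLML)

# Because the markup is semantic, a machine can answer questions such
# as: what is this content about, who is it for, and at what level of
# the learning hierarchy (memorize, explain, apply, ...) does it sit?
subject = root.get("subject")
level = root.find("objective").get("level")
audience = root.find("content").get("audience")

print(subject, level, audience)
```

Any application that understands this markup can route the content appropriately, regardless of how the application itself was built.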
Final Vision
A standardized semantic markup language based on the science of instructional design gives the learning community a powerful foundation to start ‘feeding the web’ with content that can be understood by machines. That understanding includes who the content is for, when people need it, what its subject is, its status in the learning hierarchy (memorize, explain, apply, etc.), its language and more. Once a machine can ‘understand’ the content, machines can ‘process’ it into many different applications, including mobile delivery, eLearning, performance support tools, print-based documents, etc. It also means that anyone using sLML who has created a personalized processing agent can grab any sLML content and have it processed to their individual specifications. To be clear, the processing includes building code such as HTML, PDF or Flash in real time, at the moment the content is actually being accessed.
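A run-time processing agent of the kind described above could be sketched as follows. Again, both the markup and the transformation rules are hypothetical, meant only to illustrate building an HTML deliverable at the moment the content is requested rather than ahead of time:

```python
import xml.etree.ElementTree as ET

# Hypothetical semantically tagged source content (invented tags).
SOURCE = """
<lesson subject="fire-safety">
  <objective level="apply">Operate a fire extinguisher.</objective>
  <step>Pull the pin.</step>
  <step>Aim at the base of the fire.</step>
  <step>Squeeze the handle.</step>
</lesson>
"""

def process(xml_text):
    """A toy processing agent: builds an HTML deliverable at the
    moment the content is accessed, driven entirely by the tags."""
    lesson = ET.fromstring(xml_text)
    html = [f"<h2>{lesson.find('objective').text}</h2>", "<ol>"]
    for step in lesson.findall("step"):
        html.append(f"  <li>{step.text}</li>")
    html.append("</ol>")
    return "\n".join(html)

print(process(SOURCE))
```

The same source could just as easily be handed to a different agent that emits print layout or mobile markup; nothing in the content ties it to one deliverable.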
Benefits
Intuitive markup for creating learning content
Drastically reduce development time for print, eLearning, mLearning
Expand content, modify content without having to repackage it into its deliverable
Tuesday, June 8, 2010
How the web wants us to learn
For years now, I've been touting the advantages of structured authoring for learning, using instructional design as the semantic framework for markup. In so doing, I believe the value of instructional designers would once again be placed in the right spot, and the industry as a whole (specifically consumers) would benefit in both dollars and time.
Well…the industry is still moving clumsily along with 'black box' technology, yet the most powerful learning tool in the world has completely changed the way people learn and has also demonstrated what learning could be. I would suggest that the Google search engine has done more for learners and learning than any other million-dollar app out there. In fact, I would even suggest that Google has given the web a will of its own about how people should learn. People have changed their own expectations and clearly show that they don't want 'event'-based training, but rather tools that give them answers when they need them. So the web wants us to learn through performance-support paradigms (which may include full courses), and we want to learn that way too.
So why does our industry ignore what's happening and continue to work on technology that the web doesn't understand? Structured authoring for learning is the process that supports learning technologies the web understands. What the web understands, the web can process. What the web can process results in pinpoint information when we need it most. What platform is more ubiquitous than the web? Why not use it?