Hello,
This will be my last blog post here. My blog has now moved to:
http://blog.edcetratraining.com/
This month I'm focusing on providing neat little tips and tricks for working with structured content and delivering high-end eLearning courses.
Wednesday, September 8, 2010
Friday, July 23, 2010
Why eLearning Should Care About Adobe versus Apple
Two technology giants have exchanged blows in recent months, each attempting to knock the other out in the public's eye. Apple and Adobe are at odds over the adoption of the Flash platform on iPhones, iPads and iPods. The battle in fact reinforces the public's strong desire to see the two giants work together, but alas there seems to be no reconciliation in the near future.
In a letter posted on the Apple website, Steve Jobs, CEO of Apple, writes:
Flash was created during the PC era – for PCs and mice. Flash is a successful business for Adobe, and we can understand why they want to push it beyond PCs. But the mobile era is about low power devices, touch interfaces and open web standards – all areas where Flash falls short.
The avalanche of media outlets offering their content for Apple’s mobile devices demonstrates that Flash is no longer necessary to watch video or consume any kind of web content. And the 200,000 apps on Apple’s App Store proves that Flash isn’t necessary for tens of thousands of developers to create graphically rich applications, including games. (http://www.apple.com/hotnews/thoughts-on-flash/)
One party with a real interest in reconciliation between the two giants is the eLearning industry, which relies heavily on Flash to deliver what is commonly known as 'interactive content', videos and even just plain text. Moreover, the recent fascination with mLearning has the industry wanting a ubiquitous mobile platform that includes Flash capabilities.
Almost all industry-accepted eLearning tools have Flash outputs, and most have an mLearning component or are looking into one. The great majority of organizations that develop their eLearning internally (which I would think form the largest producers of eLearning) have armed themselves with a 'do-it-yourself' tool and handed it off to non-technical resources who inevitably become the 'developers'. As online learning goes mobile, questions about how to build mobile learning and how to deploy it are surfacing en masse. Almost all conversations around these questions turn to the ugly truth that iPhones and iPads don't run Flash, so the 'interactive bag-o-tricks' has to be left at the door for large-scale mLearning deployments. Instead, folks are asked to think creatively about how best to use the technology to make it larger than what it is (and there have been plenty of great ideas).
In the letter Jobs posted on the Apple site, one section really caught my eye and sparked the motivation to write. He says:
Adobe’s Flash products are 100% proprietary. They are only available from Adobe, and Adobe has sole authority as to their future enhancement, pricing, etc. While Adobe’s Flash products are widely available, this does not mean they are open, since they are controlled entirely by Adobe and available only from Adobe. By almost any definition, Flash is a closed system. (http://www.apple.com/hotnews/thoughts-on-flash/)
Ever hear the expression 'Here today, gone tomorrow'? With the rising white noise of HTML5 on the horizon, are we as confident as we used to be that Flash is going nowhere? Speaking of Apple, how many MacBooks did we see in the airport five years ago perched on the business traveler's lap? How many do we see today? Personally, I see almost as many Macs as I do PCs. All that to say that technology changes, and it can change quickly or very slowly. The scary part of what Jobs says is the collective sound of the bottom dropping out of corporate America's internal training programs when they become locked out of use as new platforms take over.
Although this specific battle between Apple and Adobe isn't at the heart of the issue (the heart is the proprietary nature of the tools we use in eLearning), the battle should be a rallying cry for everyone who has invested in technology-enabled training. When I first started in this field, the big concern for corporate America was the collective retirement of baby boomers and the skills and knowledge they would take with them as they went. How would corporate America capture that knowledge and then use it to train the up-and-coming generation of workers? Funny enough, the answer seems to have been 'lock the knowledge into a proprietary format and distribute it using proprietary tools'. Funny...because what happens when the tools disappear?
Sure, it won't happen tomorrow, or will it? A couple of years ago, a fairly large educational institution received its new invoice for licensing a proprietary LMS for the next few years. Startled by the price increase, the institution released a request for proposal seeking a new LMS at a more affordable price. As part of the requirements listed in the RFP, the new vendor would be responsible for converting legacy content from the old LMS into the new platform. Guess what? Although a different LMS was cheaper, the cost of converting content to the new platform far exceeded the renewed licensing costs of the old LMS. Faced with a fiscally painful no-win decision, the least painful route was sticking with the old system. Now, I'm a believer in standards and support the SCORM initiative; however, every system has its nuances in implementing the standards, and when you have thousands of courses and modules, the task of changing even one line of code to accommodate a system's nuance becomes a rather large undertaking.
The solution, for me, is not necessarily to implement an open source LMS, although in the right situation that may solve some problems for some people. Instead, the solution is wrapping our content in open source formats. The required shifts in paradigm, workflow and business modeling have to date kept corporations from taking a good look at this (since it requires actual expertise rather than putting a new hat on an unqualified resource), but the feud between Apple and Adobe should at the very least make us reconsider.
The notion of wrapping content in open source formats may seem foreign, and truth be told I'm using this phrasing in a very specific way. By 'open source formats' I'm driving at a vision of multiple software platforms (LMSs, LCMSs, authoring tools, etc.) being able to process the same content regardless of how each platform was built. Processing content refers to a machine's ability to apply a defined set of algorithms to ingest the content and then output it in a specified format. The only precondition to these platforms processing the content is that they contain algorithms that understand how the content was wrapped. Sounds complicated, but in truth all systems work this way: for any system to work, there must be a common language, or common denominator, that all components of the system understand. The common denominator allows the components to operate under their own sets of rules, yet communicate with one another.
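As a rough sketch of this common-denominator idea (all field names here are invented for illustration, not from any real standard): the shared format is simply a structure that every component agrees to read, whatever its internal rules.

```python
# A minimal shared 'wrapper' format: any component that understands
# these three fields can process the content. The field names are
# hypothetical, invented for illustration only.
content_item = {
    "type": "definition",
    "subject": "fire triangle",
    "body": "A fire needs heat, fuel, and oxygen to burn.",
}

def lms_ingest(item):
    """One component: files the item as a course asset."""
    return f"COURSE ASSET ({item['type']}): {item['body']}"

def search_index(item):
    """A different component with different internal rules,
    relying only on the same shared wrapper."""
    return {item["subject"]: item["body"]}

# Neither component knows how the other is built; both communicate
# through the common denominator format alone.
print(lms_ingest(content_item))
print(search_index(content_item))
```

The point of the sketch is only that the two functions share nothing except the agreed-upon field names, which is what lets the content flow between them.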
In his paper At the Intersection of Learning and Enterprise Content Management, Bryan Chapman of Brandon Hall talks about the disconnectedness of training departments from ECM strategies, which results in training departments having to recreate content that already exists elsewhere in the organization. That disconnectedness will always make training irrelevant when push comes to shove, because training departments aren't part of the system.
To be part of the system, training departments need to be plugged in. Being plugged in means that content that exists elsewhere in the organization can flow through the training department (it can be processed and turned into training) without having to be recreated. Building content with tool suites that only speak to themselves, locking training content into Flash, and putting it into systems that can only be used for training purposes clearly keep the training department disconnected from the organization's system.
Turning again to the feud between Adobe and Apple, there ought to be strong global interest within the eLearning community in seeing it resolved, lest the marginalization of Flash become a reality. Moreover, there ought to be strong global interest from the eLearning community in protecting themselves and their organizations from future feuds through the adoption of 'open source formats' for their content. Not only will organizations be protected from feuding giants, but they will also have a basis on which to exchange content and have it flow through the system.
Labels:
Adobe,
Apple,
edCetra Training,
eLearning,
eLearning 3.0,
semantic web
Wednesday, June 30, 2010
What is sLML and why use it?
sLML stands for Structured Learning Markup Language. It is an open source XML standard that helps learning designers and developers provide meaningful semantic markup for learning or knowledge-based content. sLML was designed to support web 3.0 and the notion of performance-based learning as opposed to event-based learning. sLML provides a rich lexicon of instructionally relevant tags for content data. The sLML schema supports instructional design through the application of terms and concepts from the science of instructional design.
There are two important reasons to use sLML:
1) sLML provides a natural language around the tagging of content that can act as a common technology for processing learning content. What does this mean? In the same way that the Dewey Decimal System acts as a common technology for library science, sLML can provide a common framework for different learning applications to process content. 'Processing content' in this case refers to a computer's ability to understand the nature of the learning content and then distribute it to the appropriate platform, to the appropriate audience, in the appropriate language, at the appropriate time. The distribution of content can be print-based, computer-based, through mobile devices or any other channel. Again, if you think about the Dewey Decimal System and how it is used by computers or card catalogs to find and retrieve books, the technology around the system is secondary. What makes the technology useful is the Dewey Decimal System itself. In the case of sLML, organizations are free to build their own tools relevant to their organization, much as different applications have been built around the Dewey Decimal System.
2) The use of the sLML model is consistent with the evolution of web technology. The web is slowly becoming the ubiquitous operating system for everybody. More and more people are storing, transferring and using documents and applications directly on the web. As it relates to sLML, using the web as a processor means that computers (not humans) will be able to do the actual developing, compiling and distribution of content into eLearning, print, mobile, etc. at run time. This is much different from what happens today. Today the process of developing and compiling learning content into packages for distribution is manual. People compile content into pages, develop the code for those pages, apply the proper standards to the code so that the content can 'play' in the appropriate application, package the content and so on. Having a computer do that work means that content can be consumed in sync with its inclusion on the web. In other words, plug content into the web using a standardized semantic markup language (sLML) and then have your web-based processing agent make sense of it, compile it, distribute it, track it and anything else, all at run time.
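A minimal sketch of that run-time idea, with the caveat that the markup below is invented for illustration and is not actual sLML: the deliverable does not exist until the moment the content is requested, and a different channel simply triggers a different compilation of the same source.

```python
import xml.etree.ElementTree as ET

# Content lives on the web in a neutral semantic format (tag names
# are hypothetical); nothing is pre-packaged for any delivery channel.
source = ("<topic name='CPR'>"
          "<step>Check responsiveness.</step>"
          "<step>Call for help.</step>"
          "</topic>")

def compile_for(channel, xml_text):
    """A toy 'processing agent': builds the deliverable at request time."""
    topic = ET.fromstring(xml_text)
    steps = [s.text for s in topic.findall("step")]
    if channel == "web":
        items = "".join(f"<li>{s}</li>" for s in steps)
        return f"<h2>{topic.get('name')}</h2><ol>{items}</ol>"
    if channel == "print":
        return topic.get("name") + "\n" + "\n".join(
            f"{i}. {s}" for i, s in enumerate(steps, 1))
    raise ValueError(f"no compiler for channel: {channel}")

# The same source is compiled only when, and for whom, it is accessed.
print(compile_for("web", source))
print(compile_for("print", source))
```

Adding a new channel means adding one more compiler branch; the authored content itself is never touched.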
Final Vision
A standardized semantic markup language based on the science of instructional design gives the learning community a powerful foundation to start 'feeding the web' with content that machines can understand. That understanding includes who the content is for, when people need it, what its subject is, its place in the learning hierarchy (memorize, explain, apply, etc.), its language and more. Once a machine can 'understand' the content, machines can 'process' it into many different applications, including mobile delivery, eLearning, performance support tools, print-based documents, etc. It also means that anyone using sLML who has created personalized processing agents can grab any sLML content and have it processed to their individual specifications. To be clear, the processing includes building output such as HTML, PDF or Flash in real time, at the moment the content is actually accessed.
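As a toy sketch of machine 'understanding' through metadata (every field name and value below is invented for illustration, not drawn from the sLML schema): once content carries standardized semantic fields, a simple agent can select what to process, for whom.

```python
# Hypothetical semantic fields mirroring the kinds of facts the text
# says a machine should know: audience, subject, learning level, language.
catalog = [
    {"id": "c1", "audience": "nurses", "subject": "CPR", "level": "apply",    "lang": "en"},
    {"id": "c2", "audience": "nurses", "subject": "CPR", "level": "memorize", "lang": "fr"},
    {"id": "c3", "audience": "sales",  "subject": "CRM", "level": "explain",  "lang": "en"},
]

def select(catalog, **needs):
    """A minimal 'personalized processing agent': picks content whose
    semantic fields match an individual's specifications."""
    return [c["id"] for c in catalog
            if all(c.get(k) == v for k, v in needs.items())]

# Content for English-speaking nurses, selected purely from semantics.
print(select(catalog, audience="nurses", lang="en"))
```

The selection step is trivial precisely because the hard work, agreeing on the semantic fields, has already been done by the markup standard.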
Benefits
Intuitive markup for creating learning content
Drastically reduce development time for print, eLearning, mLearning
Expand content, modify content without having to repackage it into its deliverable
Tuesday, June 8, 2010
How the web wants us to learn
For years now, I've been touting the advantages of structured authoring for learning, using instructional design as the semantic framework for markup. In so doing, I believe the value of instructional designers will once again be placed in the right spot, and the industry as a whole (specifically consumers) will benefit in both dollars and time.
Well...the industry is still moving clumsily along with 'black box' technology, yet the most powerful learning tool in the world has completely changed the way people learn and has also demonstrated what learning could be. I would suggest that the Google search engine has done more for learners and learning than any other million-dollar app out there. In fact, I would even suggest that Google has imposed a will on the web as to how people should learn. People have changed their own expectations and clearly show that they don't want 'event'-based training, but rather tools that give them answers when they need them. So the web wants us to learn through performance-support-based paradigms (which may include full courses), and we want to learn that way too.
So why does our industry ignore what's happening and continue to build technology that the web doesn't understand? Structured authoring for learning is the process that supports learning technologies the web understands. What the web understands, the web can process. What the web can process results in pinpoint information when we need it most. What platform is more ubiquitous than the web? Why not use it?
Thursday, October 8, 2009
Buy with an exit strategy
One thing I always like to tell people when they are about to purchase eLearning products or services is: buy with an exit strategy. This means that before you spend your money or your company's money on a product or service, consider the impact of the product itself, or the vendor behind it, not being around at some point. It also means buying with the knowledge that you may want to move in a different direction one day.
Some of the biggest mistakes I've seen in the market come from large purchases made without an exit strategy. A company pays millions for an LMS and wants to switch, but the cost of moving content from one LMS to the other is so high that spending millions to renew a license for a product you don't like is actually the path of least resistance.
Now factor in that the money spent designing and developing content at some point exceeds any money spent on infrastructure, and you would think decisions around products and tools might be made with a little more caution. Straight up, no bullshit: 99% of the products and services people purchase are designed with a hook. They are designed to make the exit strategy painful.
One of the greatest benefits of a structured authoring approach based on open source technology is that if I need to move my content and my technology to a new platform...I can. Because form and function aren't hard-coded together, and both are built using standard web technologies, I can move content to a new delivery and authoring platform without having to reauthor everything. What are you going to do with your content when you switch LMSs? How much money are you going to shell out to move your 'legacy' content?
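A hedged sketch of the form/function separation (the markup and both 'platforms' below are invented stand-ins, not any real LMS format): when content is authored once in a neutral structure, moving platforms means swapping a small template, not reauthoring.

```python
import xml.etree.ElementTree as ET

# The content itself, authored once in a structured, tool-neutral
# format (tag names are hypothetical).
course = ("<module title='Onboarding'>"
          "<page>Welcome aboard.</page>"
          "<page>Meet the team.</page>"
          "</module>")

# 'Form' lives in small per-platform templates; the content is untouched.
TEMPLATES = {
    "old_lms": lambda title, pages: f"[{title}] " + " | ".join(pages),
    "new_lms": lambda title, pages: f"# {title}\n" + "\n".join(f"* {p}" for p in pages),
}

def deliver(platform, xml_text):
    """Re-targeting content is a template swap, not a reauthoring project."""
    module = ET.fromstring(xml_text)
    pages = [p.text for p in module.findall("page")]
    return TEMPLATES[platform](module.get("title"), pages)

print(deliver("old_lms", course))
print(deliver("new_lms", course))  # same content, new platform
```

Contrast this with content locked inside a proprietary authoring tool, where the equivalent of `TEMPLATES` is baked into every published module.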
You might want to rethink your Articulate/Captivate/Lectora/etc strategy.
Monday, September 21, 2009
sLML
Well...edCetra Training has finally released its open source semantic markup language for learning: sLML. It stands for Structured Learning Markup Language. The point of creating and releasing the language is to start a dialogue about a single platform that is diverse enough, scalable enough and rigid enough to power 'on-demand' learning. The language has shied away from redefining packaging and processing standards, focusing instead on finding a generic instructional design model that can help folks structure their learning and provide semantic meaning to their content.
With this type of language in place, the potential of the semantic web can break through. Processing semantic content can happen on multiple levels and through multiple streams. It can happen in real time, or content can be packaged and loaded up somewhere. Content can be reused, repurposed and rebranded without having to be saved elsewhere.
The sLML package can be downloaded from SourceForge, and we are looking for contributors to help us push this specification forward.
Labels:
edCetra Training,
eLearning,
eLearning 3.0,
sLML
Monday, July 27, 2009
eLearning the Religion - The Unfortunate Parallel
It might be because I'm reading the book 'God Is Not Great', but I can't shake the feeling that eLearning and the folks within the industry operate as a 'religion'. Now, some of you will think this is a good thing. I couldn't disagree more.
Without getting into a debate over 'religion' itself, let me point out some of the characteristics of a religion that I believe are universally true and then discuss how these characteristics when applied to eLearning prove to be a detriment.
1) All religions require a 'leap of faith'. In other words, religion is beyond reason, and if you tried to logically defend it, you would at some point have to agree that the 'first thing' (referred to as God) cannot ever be logically proven as fact and therefore requires a leap of faith. Religion would argue this is because we as humans could not possibly really 'get it' except through divine revelation.
2) All religions have 'Guardians' of truth that serve as the messengers who spread the word and work to assure the faithful that their faith is well placed and also rebuke the nay sayers.
3) Religions require 'status quo'. Ideally there is no change...ever...since the basis of a religion is thought to come from God him/herself. If it comes from God...it must be true! Sure, there is evolution and modernization...but really this is simply trying to attract the next generation of believers...since the current generation would never want anything changed.
Now let's look at these characteristics and apply them to eLearning:
1) Leap of Faith - People buy products and services all the time without ever having a logical reason or the right information about what they're buying. They acknowledge their ignorance and take somebody's word for it that whatever they're buying will do the job. This is pervasive in the eLearning industry. There is a significant portion of decision makers who are in their positions without any knowledge at all of learning, training, development, eLearning, etc., who pay big bucks for products and services. Even the people below the decision makers who inform them are in no position to understand the ins and outs of what they recommend. The end result: the leap of faith, and a big waste of money.
2) The Guardians of Truth - These are the folks that keep the status quo. They reaffirm the faithful and keep away the naysayers. If you don't believe these exist, take a look at any conference and look at the vendors, speakers and participants. The vendors have spent lots of money on their products, so they absolutely want to slow down change to sell as much product as they can. The speakers, in the hopes of attracting the largest number of people, must talk about topics that are familiar to everyone, and most of the time the speakers are vendors. And then you have the participants. The participants only know what they've heard at conferences. Their education comes from the literature and the conferences, which all conspire to market the same message, and that message is the one that the guardians of truth want us to hear: the messages that sell the most products and attract the greatest number of attendees.
3) Status Quo - There is no doubt that when large groups of people all share the same message, even as a person of reason I'm going to tend to drop my guard and believe what I hear. The eLearning industry and the organizations that operate in it must try to maintain the status quo to satisfy their participants. Change is intentionally halted and slowed down. The industry preys on ignorance and systematically keeps ignorance alive by bombarding participants with half truths, lies and simplified ideas. How else could it sell to the ignorant?
If you still think I'm being harsh, consider eLearning 2.0. How many people know where the '2.0' comes from? How many people know what it refers to? How many people understand the concepts implicit in tying '2.0' to learning? My guess is the answer to each question is 'not many'. If that's the case, why is everybody talking about it? Where is the research? Where are the naysayers?
eLearning needs the naysayers. We need to educate ourselves so that we can make informed decisions, not decisions based on a leap of faith. We need to challenge the orthodoxy. We need to push our institutions to showcase what the 'others' are saying. What organization out there is going to take up this challenge?