Archive for the ‘Miscellaneous’ Category

Earlier today a friend of mine, Frank Bridges, posted a link to an article about a new Facebook app called Paper. He drew a comparison between Facebook and Instagram and MTV and its sister channel VH1, a damned good comparison that makes it a paragraph worth reading.[1]

If you go back to boyd & Ellison’s Social Network Sites: Definition, History, and Scholarship (2008), they lay out the chronological history of the major social networking sites from SixDegrees.com (*sniffle*) to Facebook (which, at the time it was published, had been open to the general public for less than a year). What is clear is that the popularity of any given social networking site seems to follow a pattern. It builds, generally driven by a youthful tide, peaks, seems to collapse in on itself, and as the popularity recedes, the “next big thing” comes crashing on-shore. (The additional part of that cycle I’ve noticed is that the big ones, like Friendster and MySpace, seem to redefine themselves and come back as niche sites.) Facebook came along in time to hop on top of the mobile wave and has been able to ride it pretty steadily since about 2005, far longer than any other site.

To quote from an earlier blog post of mine:

Facebook benefited from 2 things that I think gave them a longer lifespan than their predecessors. First, it had a built-in population of users by the time it opened to the general public in 2006. By coincidence or design (and probably a bit of both) the progression of their rollout populations was very smart. By the time they opened up to the general public, young people from about 14 to 25 were already acquainted and comfortable with the brand and usage expanded up and down from there. Its ascendancy also coincided with the dramatic uptick in the adoption of mobile technology. This meant that you could carry your entire social network in your pocket (well, at least the people that were also on Facebook).

I’ve always seen their growth strategy up to about 2009 as being very simple: “how do we make the site sticky enough to retain the users and seductive enough to convert the non-users?” I think some very prescient folks realized that Facebook would lose its cachet among teens and 20somethings as their parents and *grandparents* swelled its ranks. Really, who wants to go dancing at the same club their parents go to? The Pew Internet & American Life Project told us that teens are “diversifying their social network portfolio” (Madden, 2013); keeping the Facebook account while using other sites they perceive as having less drama and fewer adults.

Facebook addressed this in 2010 by picking up the pace of its investments in technologies and sites that allowed it to enhance the services it provides to users at either end of the spectrum, including the 2012 acquisitions of Lightbox and Instagram (Wikipedia, 2012; “Facebook Newsroom,” 2012; “Forbes,” n.d.; “Inside Facebook,” n.d.). (They added other functions and sites to meet the needs of other site stakeholders, but we’re not looking at that right now.)

Instagram is their attempt to retain the lion’s share of the youth audience; it’s MTV. I know a young man in junior high school who isn’t very interested in having a Facebook account but who thinks his Instagram account is awesome. Paper, on the other hand, is VH1: an attempt to retain the late Boomers/early Gen Xers who are still ambivalent about the growing role technology is playing in their ability to connect with their family and friends, as well as to offer something fresh and new to their original core audience. Heck, they even include a guy using a manual typewriter in their promotional video!

Well played, Facebook, well played.


Boyd, D. M., & Ellison, N. B. (2008). Social Network Sites: Definition, History, and Scholarship. Journal of Computer-Mediated Communication, 13(1), 210–230. doi:10.1111/j.1083-6101.2007.00393.x

Wikipedia contributors. (2012). Facebook. In Wikipedia, the free encyclopedia.

Facebook Newsroom. (2012).

Forbes. (n.d.).

Inside Facebook. (n.d.). Retrieved February 03, 2014, from http://www.insidefacebook.com/

Madden, M. (2013). Teens Haven’t Abandoned Facebook (Yet). Washington, D.C. Retrieved from http://www.pewinternet.org/Commentary/2013/August/Teens-Havent-Abandoned-Facebook-Yet.aspx

[1] “Facebook is now the VH1 and Instagram is the MTV. Years ago I remember I was watching VH1 all the time and I wondered how the hell that happened since I had never watched the channel before. Then I realized that not only had I changed, but so did VH1 and that was a planned thing, because many of my generation had stopped watching MTV. Facebook is bleeding young people at the moment, because they are using Instagram more. They are communicating with images and hashtags. FB’s Paper is a way to keep us older folks who like to read tangible objects and write with tangible objects”.

Read Full Post »

I’m not talking about technology today. Today I’m talking about social history, triggered by something I read that made me angry. This is personal and anecdotal, but it’s a topic that I’ve given some thought to over the years.

Today, the Washington Post posted an opinion piece by Dana Milbank called “The Weakest Generation?” Please take some time to read it; it’s worth taking in his point of view. In it he talks about his parents attending the Great March on Washington, and he quotes his father as saying, “When people talk about Martin Luther King, that’s my connection. It’s a small connection — no handshake or anything — but I’m proud to have been there.”

This piece is the conceit of the most privileged of the baby boom generation, filtered through one who embraced their self-rewarding worldview.

His thesis is both wrong and insulting. How can he say of his, of *my*, generation: “[w]e grew up soft: unthreatened, unchallenged and uninspired. We lacked a cause greater than self”? Isn’t that the same charge leveled at his parents’ generation by *their* parents? He’s internalized Boomer bullshit and regurgitated it in this editorial.

Curating and passing on history has always been the dominion of the elite, and the Boomer generation is no exception. Those who tell the tales Milbank takes as truth were able to go to college and had the free time to attend events like the Great March on Washington. When they got out of college, they went into positions that afforded them the freedom to write about their experiences as though they were nearly universal and to filter the experiences of others through their lens.

Like all older generations, they would have us believe that they made a lasting, positive difference in the world.  Well, that’s true for every generation. Whether it is a World War, Civil War, assassinations, financial upheaval, or fights for voting rights, every generation has had those historical movements and moments that marked its soul and shaped its legacy.

Let me put this into perspective for you: Boomers had a good time at Woodstock; my generation had a good time at Live Aid and contributed to a serious cause. (And, for the record, his father is no more connected to Martin Luther King, Jr. than I am to Madonna just because I was in JFK Stadium that blazing hot day.)

They had the birth control pill, freeing them to enjoy a level of sexual freedom and be open and public about it. When most of us were beginning our sexual lives, AIDS was the ugly specter peering over our shoulder.

Boomers were raised, for the most part, in an age of prosperity and relative financial security. If you grew up in the 70s and 80s, you were much more likely to have had 2 working parents, or divorced parents, or to live in a blended family, or to live through periods where a parent was laid off from their job.

When we were kids, having enough gas to power our cars became a serious, tangible issue.

After September 11, 2001, we may have been told to go shopping, but I also remember in the weeks after, men of all ages *volunteering* to go into the military. I remember all of the people who volunteered to help rebuild New Orleans and those who showed up to assist at my beloved Jersey Shore after Hurricane Sandy.

No, Mr. Milbank is mistaken if he thinks our generation is “untested by trial…[and] squandering American greatness by turning routine give-and-take into warfare”. Like every other generation, we have our challenges, and, like every other generation, we have risen to those challenges. Sometimes only partially, often imperfectly, but we rise and will continue to do so.

And you know something, so will the Millennials, who are coming right up behind us, and their children and grandchildren and every successive generation.

Shame on Dana Milbank, shame on him for foisting his weak, biased version of social history off on us.

Read Full Post »

When crisis survivors[1] begin to face the public, they often appear on the TV interview circuit: Anderson Cooper, The Today Show, Good Morning America, etc. Every host asks the same half dozen questions, and every interview is punctuated by the same news footage; the only thing that changes is the set and who is asking the questions. Hannah Anderson[2] threw broadcast journalism into a bit of a tizzy last week because she (unintentionally) flipped the script.

Prior to her kidnapping Anderson maintained an account on the website Ask.fm.[3] As soon as she got home, she took to that site and answered questions from anyone who asked directly and with no filter. She also made a point to tell those identifying themselves as journalists that she would not answer their questions and that they should leave her family alone.

Why Hannah went to that site only she can answer, maybe she wanted to do something mindless, maybe this was an effort to get back to normal, maybe she wanted to see if people had questions for her. What we do know is that the questions and comments ran the gamut from flirtatious to sympathetic to prurient. At times her answers were blunt:

[q] Why didn’t you tell your parents he creeped you out?

[a] In part, he was my dad’s best friend and I didn’t want to ruin anything between them….

[q] Are you glad he’s dead?

[a] Absolutely (Wian, 2013).

Almost right away, news organizations began hitting up every psychologist, social worker and social media “expert” they could find to comment on this. Some handwringing sob sisters took to the airwaves and Internet questioning why she did this and about how inappropriate it was for her father to allow her access to social media. Others recognized that as a child of the Electronic Social Media Age, Anderson’s actions were not surprising and in fact, could even be considered healthy.  Others still just published screen caps of her account and wrote scant commentary around it. (I’m not including a bunch of citations here as the online commentary is easily googled).

This was different, and I’m not sure that the media knew what to do. With her blunt talk, selfies and shots of her new manicure, Anderson didn’t fit the model of “what a victim does”. Was some of the traditional press squawking at the thought of being pointedly, and publicly, cut out of the picture? It is certain that Matt Lauer wouldn’t ask some of those questions that she answered.

According to Baym and boyd, “[P]eople… use the public and quasi-public qualities of social media to carve out safe identities for themselves in the face of legal troubles, create public memorials for the dead, [and] narrate their own stories…” (Baym & boyd, 2012). Isn’t that just what Anderson did? In immediately taking to social media, Anderson (quite unknowingly, I’m sure) did just that. She put her unedited narrative out there without the help of a broadcast media outlet. If you asked her why she did it, her answer might not be the same as Baym and boyd’s in letter, but I bet it would match the spirit.


Baym, N. K., & Boyd, D. (2012). Socially Mediated Publicness: An Introduction. Journal of Broadcasting & Electronic Media, 56(3), 320–329. doi:10.1080/08838151.2012.705200

Wian, C. (2013). Friend: Hannah Anderson discusses kidnapping on social media. CNN.com. Retrieved August 18, 2013, from http://www.cnn.com/2013/08/14/us/hannah-anderson-social-media

[1] I use the term survivor with great intention. I refuse to call anyone who gets through something like this a victim. Increasingly I find that term diminishes the individual by casting them in the role of the captive, the sufferer. The word “survivor” looks towards their future. You are only a victim until it is over.

[2] In August 2013, Hannah Anderson was kidnapped by a family friend who killed her mother, brother and dog. After an Amber Alert and a multi-state search, the two were found about a week later and she was rescued. Her kidnapper was killed after firing a gun at police.

[3] Ask.fm is a European-based site where users, who can choose to remain anonymous, can ask other users questions about pretty much anything. The answers to every question appear on the user’s home screen in the form of an extended Q&A.

Read Full Post »

This has been a busy summer. I’ve had some big things in the works that have kept me away from my beloved blog here but I’m going to remedy that. For the time being I will be using this blog to make shorter posts, maybe even Twitter sized, as a way of capturing ideas that I may not have the time to write an expanded essay on but want to return to at a later time.

My biggest news is that I am going to Purdue University for my PhD.

[Photo: Beering Hall, home of the Brian Lamb School of Communication, Purdue University.]

I’m thrilled to work with the fantastic faculty, and the other students I’ve connected with have been friendly and engaging. I know I will be challenged and stimulated. I am preparing to leave New Jersey early next week, and this next part of my life in The Academy begins in mid-August.

My new academic email address is pjeter@purdue.edu.

While I will miss my friends and colleagues at Rutgers University, the nice thing about being an academic is that those connections are never really broken. They are now my collaborators and fellow alumni. It’s not the end but a rite of passage, a transformation,  and that excites me a great deal.

Along those lines, I will be co-presenting two papers at the National Communication Association 99th Annual Conference in Washington, DC in November. If you are going, please look for me; I’d love to grab some coffee with you. (OK, I love coffee, period, but I’d love to connect with anyone who reads my blog.) I’ll discuss those papers a *tiny* bit more in a later blog post. I’m not giving too much away, though; I want you to come see the presentations!

Keep watching this space; my adventure continues.

Read Full Post »

Although my main interest is in social interactions and identity online, I am also enamored of pop culture, and memes are one of the foremost manifestations of modern pop culture. In his book The Selfish Gene (1989), Richard Dawkins coined the word meme to describe a unit of cultural transmission. He adapted it from the word gene, which is a unit of physical attributes that are passed on from generation to generation.[1] The idea is that, like genes and viruses, memes move from person to person through social contact.[2]
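If you want to see that gene/virus analogy in motion, here is a toy sketch of my own (nothing from Dawkins’s book; the network size, the number of acquaintances, and the sharing probability are all invented) that treats a meme as a contagion hopping across a random web of acquaintances:

```python
import random

random.seed(42)  # make the toy run repeatable

N = 200            # people in our imaginary community (invented)
K = 5              # acquaintances per person (invented)
SHARE_PROB = 0.3   # chance a contact passes the meme along (invented)

# A random "who knows whom" network.
contacts = {i: random.sample([p for p in range(N) if p != i], K)
            for i in range(N)}

seen = {0}      # person 0 starts the meme
frontier = {0}  # people who just saw it and may pass it on
rounds = 0
while frontier:
    newly_seen = set()
    for person in frontier:
        for friend in contacts[person]:
            if friend not in seen and random.random() < SHARE_PROB:
                newly_seen.add(friend)
    seen |= newly_seen
    frontier = newly_seen
    rounds += 1

print(f"After {rounds} rounds of social contact, "
      f"{len(seen)} of {N} people have seen the meme")
```

With made-up numbers like these, a run either fizzles out after a round or two or sweeps most of the network, which is roughly the boom-or-bust pattern you see in real memes.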

Radio had its Hindenburg disaster, Welles’s “War of the Worlds” and coverage of the bombing of Pearl Harbor. TV had its “Lucy Ricardo Has a Baby”, “Who Shot J.R.?”, and the Super Bowl.[3] This infographic is an amusing amble through the Internet’s contribution to our collective social memory.

The radio and TV events I mentioned look dated to us, and I was struck by how old-fashioned the early memes look today, a scant 20 years later. Consider how dated The Dancing Baby looks to people who were born in 1994. I remember how state-of-the-art it seemed at the time; a lot of older computers couldn’t even run it at full speed.[4] It was (by the standards of the day) memory intensive and pushed the graphic capabilities of many computers. Today, in an age where video games look almost as lifelike as movies, the 3-D rendering of The Dancing Baby looks rough and unsophisticated.

Enjoy the infographic (If it’s tough to read, you can click on it to make it larger).

[1] While things such as intellectual or musical ability may at first blush not seem to be physical traits, your genes only carry the potential for a person’s brain or body to be predisposed to perform a certain function better than others. One can carry a gene that gives them the potential to be a world-class swimmer. If they don’t nurture that talent through practice and competition, that gene is still there and available to be passed on, even if the carrier didn’t take advantage of it.

[2] And yes, I am including union of ovum and sperm as a type of social contact. I recognize that under some circumstances, the people contributing the genetic material have no direct social contact. I do consider the contact they have to be mediated by the medical personnel and technology that fosters fertilization.

[3] Like we really believed that she and Ricky slept in double beds. Oh 1950s TV you were so full of the lulz!

[4] You can view The Dancing Baby here → http://www.dancing-baby.net/Babygif.htm  Go here to see The Hamster Dance → http://www.findmyhosting.com/web-hamster/

History of Memes

Read Full Post »

In 2001 Marc Prensky coined the term Digital Native. He defined this as young people (since this was written in 2001, he was referring to people born from roughly 1985 onward) who “spent their entire lives surrounded… all the other toys and tools of the digital age” (Prensky, 2001).

I think he made a good point: children born in the last two decades of the 20th century were born into a technological environment that was unique. While this could be said about any generation, this group’s technological experience was colored by significant cultural changes that shaped the century. The United States Census Bureau issued a report stating that in 1961, 17% of mothers returned to work within the first 12 months of giving birth; by 2007 that figure was 64% (United States Census Bureau, pp. 13-14).

Children born in 1961 had a few ways of interacting with others, primarily telephone and US mail. National broadcasting was something that was confined to a few large media institutions. You had to be selected by one of them to be seen in a broadcast produced by others. Television and movies consumed by children and youth in 1961 were constrained in the topics that could be discussed, the use of profanity and the images that could be shown. Compared to children in 2001, the scope of their world was smaller and more closely controlled by their community (familial and geographical) and options for interaction were more limited as well as more easily supervised.

Children born 30 years later grew up in a very different cultural landscape. They had more options for one-on-one interaction, and along with that they had the chore of deciding which tool they would use to communicate with various members of their social network (i.e., mailing a grandparent a thank-you note for a present versus emailing a parent on a business trip). They were able to directly broadcast their thoughts and actions to national and even international audiences without an intermediary controlling the broadcast. They were using technologies that were not always completely understood by their parents and other adults. At the same time that Prensky was writing about children and youths as digital natives, Bovill and Livingstone (2001) described children in First World countries as inhabiting a “bedroom culture”. In the Western homes they described, it is taken for granted that most children have their own bedrooms, filled with electronics such as radios, TVs, computers, iPods, etc.

My issue with the term Digital Native is that they aren’t natives.

In Merriam-Webster Online Dictionary, these are the first 6 definitions of the word native:

1: inborn, innate <native talents>

2: belonging to a particular place by birth

3 archaic: closely related

4: belonging to or associated with one by birth

5: natural, normal

6 a : grown, produced, or originating in a particular place or in the vicinity : local

b : living or growing naturally in a particular region : indigenous

All of these definitions carry with them the notion of a place where someone is born and/or bred or something that is intrinsic to who they are[1]. No one is born digital or as a resident of The Internet; living the digital life is something learned; humans need to become literate with it. This is not an odd or new concept. We are not born knowing how to use a telephone, little children may imitate their parents’ behavior and body language while chattering into a phone (and I’m looking at you my beloved nephew) but they have to be taught how to make a phone call. Children may sit in front of a TV but they need to be taught how to change the channel (Ok, show of hands, how many of you used to think that the people on TV lived inside the cable or TV itself?)

If you read the entire article by Prensky, he was specifically referring to education, K through college (Prensky, 2001), but even in that case I don’t like the phrase. Aside from these phrases feeling disrespectful to immigrants, his paper seems to suggest that people who were born before the digital age will always have a handicap (he calls it an accent) navigating through digital culture, and that people born into it “thrive on instant gratification and frequent rewards. They prefer games to ‘serious’ work” (Prensky, 2001). He implies that using technology is something fundamental to post-digital people. I would say that, like reading, navigating and being able to vet information online is a learned skill. While its use is assumed and a necessity in today’s business world, it’s not a fundamental skill, because becoming digital involves “old fashioned” skills such as reading, logic, communicating clearly, etc.

A big thing I see missing completely is that while the amount and form of media has exploded, how the human brain perceives and retains information has not. Also, sound instructional design is sound instructional design regardless of the media used to convey the message. An adjunct to that idea is that in a classroom situation, a good instructor can work with the group they are given. Look, it was so-called Digital Immigrants who designed much of the “Digital”; it’s not a foreign land but a land they designed and built. I think these phrases just serve to create a generation gap that I’m not really sure exists to the extent that he portrays.

Prensky is now talking about something called “Digital Wisdom”, which means finding the best combination of mind and technology. He talks about technology as enhancing human beings. I liked this idea when Vannevar Bush wrote about the Memex as a form of brain extension in “As We May Think” back in 1945.


Bovill, M., & Livingstone, S. (2001). Bedroom culture and the privatization of media use.

Children reporting online: The cultural politics of the computer lab. (2004). Television & New Media, 5(2), 87–107. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=ufh&AN=13081297&site=ehost-live

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1–6.

United States Bureau of Labor Statistics. (2012). Employment characteristics of families summary (Economic Release No. USDL-12-0771). Washington, DC: United States Department of Labor.

[1] While one might argue that language is not something inborn or intrinsic, I would say that language falls under definition #4. Even though a child may not be able to speak until 18 months or so, they are developing language skills from the beginning. I once read somewhere (and I can’t remember where, so I’m not able to verify this) that by the time a baby is four or five months old, the noises they make are the noises they need to speak the language(s) of their caretakers.

Read Full Post »

Is Facebook Doomed? is the kind of article that irks me.

It quotes a financial analyst named Eric Jackson who said, “In five to eight years they [Facebook] are going to disappear in the way that Yahoo has disappered [sic]”. Anyone who makes the grand pronouncement that by 2020 FB will have gone the way of Yahoo is stating the obvious. Of course it will, and the most junior of students of social media can tell you that.

It’s what Nicole Ellison and danah boyd told us back in 2008. In their article “Social Network Sites: Definition, History, and Scholarship”, the “History” part tells the story of social networking sites (SNS). From 6 Degrees to Friendster to MySpace to Facebook, all of these sites grow, dominate their landscape for a few years (except for 6 Degrees which *created* the landscape the subsequent SNS inhabited), and then sharply contract as their users migrate elsewhere. However, they don’t disappear; instead, after a period of dormancy and realignment, they reinvent themselves. Friendster did it, MySpace has done it and there were rumors a few years ago that 6 Degrees was trying to reboot itself (but the new “invite only” iteration seems to have sunk beneath the waves).[1]

It has been 6 years since Facebook opened up to the general public. It’s already been at the top of the SNS game twice as long as MySpace was. All social media sites have a lifespan; they end up declining either because they don’t have a critical mass of users to support them or because they become so big that they implode as new users flock to the next big thing.

Facebook benefited from 2 things that I think gave them a longer lifespan than their predecessors. First, it had a built-in population of users by the time it opened to the general public in 2006. By coincidence or design (and probably a bit of both) the progression of their rollout populations was very smart. By the time they opened up to the general public, young people from about 14 to 25 were already acquainted and comfortable with the brand and usage expanded up and down from there. Its ascendancy also coincided with the dramatic uptick in the adoption of mobile technology. This meant that you could carry your entire social network in your pocket (well, at least the people that were also on Facebook).

TPTB[2] might revoke my “Like” button, but I’m predicting that the innovation that supersedes Facebook will be here within the next 3-5 years.

I don’t know exactly what it will be, but it will come from an industry outsider. (Sorry, Google, but I’m channeling Granovetter here: innovation comes into a network from without, and you’re too strongly tied to the rest of big tech; you are an insider.) I also predict that when it happens, the remaining users will not be young people, but people 30 and older. This is because their weakest connections are the more sentimental ones from their past, and Facebook facilitates a high level of ease in maintaining those ties. I predict that older users will be less likely to move to a different platform when so much of their history, people as well as artifacts, is already embedded within the site.

Finally, I think that whatever succeeds Facebook will have a highly customizable user interface but a very stable base. Right now, Facebook seems to tweak a notable feature every 6-12 months; it changes the layout, and the usual outcome is that people complain for a while until they become acclimated. I’m predicting that the successor technology will have a user interface that is modular (you can swap elements in and out as you desire), but the basic screen will remain fairly consistent. This will enable the site to add new modules for users to plug into their personal interface if they so choose. The process of changing up the interface will be WYSIWYG[3].

Aaaand I think I just described a smartphone, lol.
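For what it’s worth, that modular-but-stable idea can be sketched in a few lines of code. This is purely a hypothetical toy of my own (the class names, slots, and modules are all invented; no real platform works exactly this way): a base screen whose layout never changes, with slots that users can plug modules into and out of.

```python
class Module:
    """One swappable piece of the interface (photos, feed, etc.)."""
    def __init__(self, name, render):
        self.name = name
        self.render = render  # function returning this module's content

class Screen:
    """The stable base: the layout never changes, only the plugged-in modules do."""
    def __init__(self):
        self.slots = {}

    def plug(self, slot, module):
        self.slots[slot] = module   # the user swaps a module into a slot...

    def unplug(self, slot):
        self.slots.pop(slot, None)  # ...or pulls it back out, WYSIWYG-style

    def render(self):
        return {slot: m.render() for slot, m in self.slots.items()}

screen = Screen()
screen.plug("sidebar", Module("photos", lambda: "latest photos"))
screen.plug("main", Module("feed", lambda: "news feed"))
print(screen.render())  # {'sidebar': 'latest photos', 'main': 'news feed'}
```

The point of the design is that swapping modules never touches the base screen, so users get customization without the layout whiplash of a full redesign.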


boyd, d. m., & Ellison, N. B. (2007). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), article 11.

Dimmel, B. (2012). Is Facebook doomed? Analyst predicts site irrelevant by 2020. Retrieved September 23, 2012, from http://www.manolith.com/2012/06/05/is-facebook-doomed-analyst-predicts-site-irrelevant-by-2020/

Granovetter, M. S. (1973). The strength of weak ties. The American Journal of Sociology, 78(6), pp. 1360-1380.

[1] The stated goal of 6 Degrees was to connect with Friends of Friends (of friends of friends and so on up to 6 nodes away) for informational and/or recreational purposes. They even showed you a diagram that would look very familiar to social network analysts where you were the central node, people you knew were connected to you by a line and were at the center of their own network systems.

I found the idea of 6 Degrees making a return tantalizing, though, because it was the first SNS that I used. While anyone looking at the interface would recognize it as an SNS in the way we are used to today, its downfall was that there just weren’t enough Internet users on the site to sustain it.

Only about 23% of US adults had Internet access in 1996, the year 6 Degrees rolled out. Internet users were still a relatively elite group technologically and socio-economically, as well as by race and gender. (Suffice to say, my sister and I were oddities in the online world.) First, they had the financial resources to purchase a computer that would have been fast enough to get you on and around the Internet in the first place. Then you needed a modem (which was generally purchased separately from your computer) and the money to pay the monthly Internet access charges. They also needed the technical know-how to set up their modem (do they even still have those master and slave switches inside a computer anymore, lol).

I also wonder if a large portion of people who might have used 6 Degrees were already networking through sites like Usenet, Prodigy and local online communities like The Well.

[2] The Powers That Be

[3] What You See Is What You Get

Read Full Post »

This is slightly off the topic of online communities but does deal with technology. Someone shared the video below with me, and I was struck by the rush of technology from crude wooden tools to rockets. Watching it, I had a couple of thoughts about technology over time.

Technology makes some people very uncomfortable because it changes the world, sometimes in profound ways.

Sherry Turkle and other digital dystopians believe that CMCs are stripping humans of our ability to connect with other people. Rather than encountering new people and situations as we pass through the concrete world, we dive down the rabbit hole of the Internet and select who and what we are exposed to.

Technology, especially communicative and travel-oriented technologies, has been greeted by a Greek chorus saying that *this* will be the technology that destroys our families and puts our youths at risk. Before the Internet it was TV, radio, automobiles, bicycles, the machinery of the industrial age and a thousand other inventions and ideas that were branded as dangerous to society. Every time we have extended our capabilities as humans, there are people who see it as “bad” as opposed to just seeing it as change.

I take a different point of view: technology is just an element like carbon or sodium. It’s not good, not bad but neutral. It’s what we choose to do with that technological element that is invested with a moral position. I am softly deterministic in that I believe that:

  • The progress of technology is inevitable. The minute something new is introduced, someone is immediately working on some variation that makes it better (for them at least)
  • It is inevitable that the evolution of technology will be a major factor in the evolution of society. I do think that there are other factors that are as important, but I think that most of those are reactions to, or implementations of, technology on some level.
  • Every technological element gets used for both good and bad purposes.

One example is the change in society today that has given many workers the 24/7 work day. There were always people who were on call (doctors, for example), but today many more workers are issued cell phones, pagers and other technologies that tether them to the workplace. If we look back into history, there are other examples. The industrial revolution began at the tail end of the 18th century and stretched into the mid to late 20th. One of the factors that drove the image of America as a country with streets paved with gold was jobs, especially jobs in the industrial centers. Not only did people move from the farms into the cities for work, but teeming masses came to the United States from all over Europe to work in the cities. I question whether we would have had that same level of immigration had industrial technology never been invented.

My other thought about technology is that the pace of development seems to be constantly increasing. Most people reading this will be familiar with Moore’s Law (no relation), which describes the exponential rate at which the speed and capacity of computers grow. I think it goes beyond that. It seems as though the rate at which all technology is being developed and introduced is speeding up. I’m not sure if that’s true, but looking at the video it seems that way.
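As a rough illustration, the popular “doubling every two years” formulation of Moore’s Law can be sketched as a simple exponential. (The starting figure and the two-year period below are commonly cited approximations I’m supplying for illustration, not anything from the video.)

```python
def projected_transistors(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Starting from the Intel 4004's roughly 2,300 transistors in 1971,
# 40 years of two-year doublings lands in the billions by 2011:
print(f"{projected_transistors(2300, 40):,.0f}")  # 2,411,724,800
```

The striking thing about the curve is that most of the growth happens at the end, which is one way to make sense of the feeling that everything is speeding up.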

Read Full Post »

I stumbled on this via one of the Twitter feeds I follow (Barry Wellman or danah boyd, maybe?) and found it an amusing trip in a time machine.

In January 1983, Time magazine declared the personal computer its Machine of the Year for 1982 (beating out, among others, Steve Jobs). The magazine stated that 4 million Americans were online, which was about 1.7% of the population. By 1995, the year of this PSA, Internet adoption was still only 14% (The Pew Research Center’s Internet & American Life Project, Internet Adoption 1995-2012). Rereading the articles from that 1983 issue of Time, the landscape of computer users was a place that was predominantly white, male, economically advantaged and technologically elite[i]. The majority of people they spoke with saw computers becoming as ubiquitous as any other household appliance, but it doesn’t seem that they saw the computer as a replacement for media sources such as the radio, TV, etc.

By 1995, a personal computer was still primarily an information tool: an electronic manifestation of Vannevar Bush’s memex; a replacement for the family typewriter; and a novel (and economical) way to communicate instantaneously and asynchronously across town or across the globe. However, the script for this PSA projected that by the time these 10-year-olds were in college (2003-2004 or so) the Internet would be the TV, phone, shopping mall and workplace.

I see this as bolstering Tim Berners-Lee’s expressed opinion that the term Web 2.0 was jargon and that the whole purpose of the World Wide Web[ii] from the beginning was to be a collaborative space that facilitated human interconnectivity. In 2005, he said about blogging (perhaps the poster child of Web 2.0), “Every person who used the web had the ability to write something. It was very easy to make a new web page and comment on what somebody else had written, which is very much what blogging is about”. That people saw the Internet as a place to connect socially, professionally and commercially would have been no surprise to Berners-Lee.

I got online back in 1991 and I recall that my social circle was amused that my sister and I (she had gotten on the Internet about 4-6 months before me) had home computers. The most common question I got was, “What do you do with it?” Email and productivity software were common in the workplace, and that was how I primarily used the computer at home: sending email to one of the few people I knew online, following a few Usenet groups, writing, doing some work from home.

At the time this PSA was made, I think I had already moved from CompuServe to AOL (or was about to)[iii]. Amazon.com came online in 1995 but was still just a bookseller. Classmates.com also came online that year (and I would cite it as *the* proto-Facebook). SixDegrees.com, which I would peg as the first social networking site most of us today would recognize as such, would launch the following year, 1996, but sputter out before the turn of the century. YouTube was still a decade away, as was Facebook (and guess who was a 5th grader back in 1995?).

However, as prescient as the writer of this PSA was (the YouTube description gives the name Cindy Gaffney), the Internet was still seen as a tool, a service provider that built on existing communication tools. The fruits of these predictions were there, but in their infancy:

  • There were rudimentary phone services. (I can’t remember the name, but I remember reading about one when I bought a modem; I’m sure it was expensive, complicated to implement, and that the quality was poor.)
  • There was online retail. Amazon’s 1995 start was quickly followed by eBay in 1996.
  • There were brief animations on the Internet. I can’t remember the exact year, but I think the Hamster Dance and the Dancing Baby came out around 1997. It took a critical mass of broadband users to make high-quality videos (and by extension Internet television) viable[iv].
  • As I mentioned earlier, one of the reasons I got a computer was so that I could work on extra projects at home. I didn’t have the authorization to upload material directly onto the organization’s server, but I could work, save it to a floppy disc and bring it to work with me. The introduction of laptops increased this activity.

The function not explicitly predicted in the video is the Internet as a virtual agora and the major role it has played in the maintenance of social network ties: blogging and social networking sites.

The activity of blogging is older than the term; that should come as no surprise to anyone reading this. I remember that some of the earliest personal sites on the WWW were crude versions of what most of us would call a blog: updates on a person’s activities, his (or less commonly her) thoughts and ideas. Some may have had pictures. I’m not sure that anyone realized how much so many of us had to say. In addition, blogging has served the very important function of providing a focal point for societal subgroups and outliers to coalesce around and form their own communities.

While you can build a case for predicting the use of the Internet as a telephone, and thus as a tool for maintaining social network ties, social networking sites have taken it far beyond that. It’s more than being able to shoot an email to a good friend after you’ve moved out of the neighborhood. You can still maintain a level of involvement in each other’s lives that wasn’t possible before through (a) more frequent incidental interaction and (b) exchanging pictures and videos of important private and public local events (sometimes within less than 5 minutes of an event occurring). So while your friends might live 1,000 miles away, you can see video of their daughter’s 7th birthday party or an annual block party. You can also get to know their friends more easily because you are all sitting in a virtual room together conversing with your common acquaintance.

I’m not sure if anyone predicted this 15 years ago (If anyone reading this knows of anything like this please let me know, I would love to read it).
Finally, a few other oldies but goodies:

This is an AOL commercial from about the same time as the PSA above (1995).

This is a news segment about high-tech gifts for Father’s Day. (I don’t know how I know this, but the “Dad” in this piece is Mike Jerrick.)

The First World Wide Webpage


boyd, d. (2012). Danah boyd’s twitter account. Retrieved August 6, 2012, from https://twitter.com/zephoria

Bush, V. (1945, July). As we may think. Atlantic Monthly.

Friedrich, O. (1983, January 3). The computer moves in. Time.

Laningham, S. (Podcast editor, IBM developerWorks). developerWorks interviews: Tim Berners-Lee [audio podcast].

Pew Internet & American Life Project. (2012). Internet adoption 1995-2011. Washington, DC: Pew Internet & American Life Project.

Wellman, B. (2012). Barry Wellman’s Twitter account. Retrieved August 6, 2012, from https://twitter.com/barrywellman

[i] Although this wasn’t the Internet, I remember using WordPerfect in the late 80s and early 90s. Formatting involved remembering function key combinations, and troubleshooting a document that didn’t look right involved interpreting the code associated with the document. And computers were not cheap. My first computer cost $2,100 in 1991. As a point of reference, my current desktop, which had the largest hard drive available when I bought it last year, ran about $1,200 in 2011 (about $770 in 1991 dollars).
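That dollar conversion is a standard consumer-price-index adjustment; here is a minimal sketch, assuming approximate CPI-U annual averages of 136.2 for 1991 and 224.9 for 2011 (figures I’ve supplied for illustration, not from the original footnote).

```python
# Convert a price between years using the ratio of consumer price index values.
CPI_U = {1991: 136.2, 2011: 224.9}  # approximate annual averages (assumed)

def in_other_years_dollars(amount: float, from_year: int, to_year: int) -> float:
    """Express `amount` (in `from_year` dollars) in `to_year` dollars."""
    return amount * CPI_U[to_year] / CPI_U[from_year]

# A $1,200 desktop in 2011 works out to roughly $730 in 1991 dollars
# with these index values -- the same ballpark as the $770 figure above.
print(round(in_other_years_dollars(1200, 2011, 1991)))  # 727
```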

[ii] For context, when I refer to the Internet, I’m talking about the tool that grew out of ARPANET, into academic institutions at large and then to the general public: the interconnected computer networks that connect us through computers, smartphones, tablets, etc. There were several protocols by which a user could access the Internet, the most popular being the World Wide Web, a term that a lot of laypeople use interchangeably with the Internet. While the Internet allowed us to connect, the World Wide Web enabled hypertext (one-click links within a document that navigate the user to another document with related information) and multimedia. Web browsers, from Mosaic to Firefox to Chrome, provided a graphical user interface which made the WWW more accessible to users who were not as technologically adept as the Internet pioneers. Going onto the Internet became less like reading a book on a screen and more like reading a colorful magazine that may also include sound and moving pictures.

[iii] In the mid 90s, AOL put on a full-court press to get subscribers. In order to use the service you needed the floppy disc (later a CD-ROM) with the software to get you online and set up. This media was *everywhere*: they would blanket-mail neighborhoods and put the discs in magazines; I even remember my local library having a display with the dreaded discs (I suspect AOL made contributions to public libraries for that sort of access). New subscribers got 10 free hours of AOL access. Back then, they charged you by the hour for access; AOL didn’t go to a flat-fee service until October 1996 (Wikipedia: AOL).

 [iv] I wonder about the role of Saturday Night Live and music videos in creating fertile ground for the “VidClip Culture” (I should probably add Sesame Street here since I’ve read in a couple of places that Sesame Street was one of the inspirations for the MTV style of short, fast bits of motion, sound, color and music). Do any media scholars have thoughts on this?

Read Full Post »

Asexuality Comes Out of the Closet.
And in the “great minds think alike” category, this news article appeared on the Rutgers University Media Relations site. One of the professors in the Rutgers School of Social Work wrote an article discussing how many asexuals are comfortable with asexuality as a part of their identity. It is also interesting to note that, like the LGBT community, they have a flag whose stripes represent the different facets of asexuality.
From a Social Work standpoint, how people integrate asexuality into their identities (pathologized vs. “normal”; curing the individual vs. fighting for societal acceptance, etc.) is an important part of working with the individual to help them integrate into society. Approaching it from a Communication point of view, however, how does a community reframe societal discourse and shift itself from a pathology (something invalid that needs to be “healed”) to a sexual orientation (something that is socially valid and merits protection under civil rights laws)? How do they develop their unique language and symbols? What are the network ties between prominent members of the community and how have they developed over the history of the community?


Questions. I’ve got a million of them.

Read Full Post »

I am a movie fanatic, especially older movies. The past few weeks, I’ve seen several movies that deal with wartime romances which led me to thinking about Walther’s theories about hyperpersonal communication (1996).

In his description of the phenomenon, he talks about an idealized perception on the part of the communication partners as they fill in the information they don’t have about each other with the most positive assumptions possible. I wonder whether a similar process occurs when two people have only a relatively short time to get to know each other. It seems logical that it might be a contributing factor. There may even be some deindividuation going on if the civilian partner (usually a woman) sees and responds to the uniform, a symbol of the fragility of the relationship as well as of life, as opposed to seeing the person who is wearing it.

Obviously Walther’s work deals specifically with cases where the communication partners are confined to communicating via CMC, whereas in the situation I am talking about the partners are co-located for the duration of this bonding process. I wonder, though, whether the situation of people meeting when one of them is most likely (if not imminently) facing danger creates an intensified sense of reality that leads to communication behaviors similar to what Walther describes.

I would be interested to read a couple of scholarly articles on the communication processes associated with so-called wartime romances.

(Some of the movies that feature this include A Farewell to Arms, Waterloo Bridge, The Man in the Gray Flannel Suit, The Best Years of Our Lives and many others.)

Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3-43. doi:10.1177/009365096023001001

Read Full Post »

Since I originally posted this back on March 14, 2012, I have been periodically checking online to see if this has been rolled out to other locales yet; I know that New York was supposed to be one of the cities they were going to roll this out to. I have yet to see anything. I’m not sure whether it’s being kept quiet to avoid attracting the kind of criticism they got in Texas.

I’m not sure how I feel about this. While it could cause people to interact with people who might otherwise become invisible, we also run the risk of dehumanizing them and viewing the “Wi-Fi homeless” as infrastructure. The best technological intentions usually have an unintended dystopian element. If this project is still going to roll out across the US, I’ll be watching it carefully to see how it goes.

Originally published on March 14, 2012

A couple of weeks ago we talked about the question of whether mobile communication technologies are refocusing people in public and semi-public spaces away from being aware of the other people they share the space with (and hence being available if the opportunity for serendipity occurs) and toward the existing social networks they are connected to via wireless technology.

This article describes an experiment that is being done at the South by Southwest Festival. A group of homeless men and women have allowed themselves to be made into Wi-Fi hotspots for hire. The company is paying them a daily rate for being a hotspot, and they are encouraged to charge users an hourly fee as well. It is not surprising that this idea has its boosters and detractors. I heard some people on the news paint it as victimizing and commodifying the homeless, stripping them of their dignity by reducing them to a mechanical device used to connect (comparatively) affluent people to other (comparatively) affluent people. Others think it’s a great idea and a way for someone who is homeless to earn money without begging or otherwise causing a public nuisance.

The PR firm that is doing this is talking about testing it out in NYC next, so if they follow through on this, the next battle will be fought right next door to here.

What do I think? I think the answer will fall somewhere in the middle. I do think that there may be a few highly motivated homeless people who will be able to parlay this into something that lifts them out of poverty. However, for the majority of folks who volunteer for this, I don’t think it will have a long or significant impact on their lives, especially since (a) a significant portion of the homeless are struggling with addiction issues and (b) they will likely become targets of other people who will want to victimize them somehow to gain access to or possession of the Wi-Fi device (heck, people have been assaulted for a pair of shoes).

I think at first people may be mindful of the homeless people they have to interface with to buy time. I have to wonder, though, whether, like the barista at Starbucks, after a while people will treat them more like payphones than people.

Links to news reports on this:

Read Full Post »