This was an engaging chapter, and it really made me think about computer-based design (and all types of design, for that matter) in a new light. Let me list the six properties of design problems as John M. Carroll presents them on page 22:
Incomplete description of the problem to be addressed
Lack of guidance on possible design moves
The design goal or solution state cannot be known in advance
Trade-offs among many interdependent elements
Reliance on a diversity of knowledge and skills
Wide-ranging impacts on human activity
What I like about this list is that at first glance, most of the points don’t seem negative and the rest seem like facts of life. But the first point is especially surprising. How often is a design rendered (no matter what the format) without fully exploring the problem? Carroll’s analogy of the house shows how addressing problems piecemeal is not the way to an effective design. If you look at the house in terms of foundation, supports, and a roof, a bigger picture emerges. But even these can be broken down further into water, cement, iron, wood, nails, shingles, more wood, etc. The result is that you might address the problem of users being able to search a database remotely, but since the problem is “solved,” little attention is paid to the placement of the search, the construction of the database, or the navigation leading to the search bar.
Carroll had another good example when talking about Dreyfuss and his approach to design. When looking to design a new typewriter, Dreyfuss spent a lot of time beforehand watching people work at their typewriters. One of the things he discovered was that due to the traditionally glossy black coating, the typewriters reflected a glare from the overhead lights into the eyes of the workers, which caused headaches. This touches a little on point six. The coating on a typewriter had been applied for aesthetic purposes; it served no other function. But while it made for a better-looking typewriter, it did not make for a better tool.
Dreyfuss really is a good example of a good route to design. He began by discovering the problem. Perhaps “problem” is a confusing word; it might be better to say that Dreyfuss always discovered the “why” behind a design and gained a thorough knowledge of the product from the point of view of the person who would be using it.
Yet often, designers who are not provided with a clear description of the problem their design is going to address assume free creative rein over the planning of the project. I think that in a lot of ways, “designer” has come to be almost synonymous with “artist.” In fact, when dealing with IWSS on the Web site, I have frequently said, “I know you guys are the designers and do this every day, so you know better than me.” Thinking about it now, that is really untrue.
They do not know better than I do. I do not necessarily know better than they do, but I do have a better understanding of my project than they do. Are they to be blamed for taking control of the project? Of course not. All too often we expect designers to be master planners as well as artists. In reality, they are probably busy focusing on the design itself and do not have the time needed to go out and search for the real problems. They rely on the clients initiating the project to discover, identify, and be on constant lookout for problems that will occur as the design trade-offs are happening.
One of the hard points to accept was that reliance on a diversity of knowledge and skills can be a bad thing…especially when we are always told otherwise. However, Carroll equates this to having too many cooks in the kitchen. Good ideas will spawn from mass collaboration, but rather than being fully developed, they can be swallowed up by the constantly emerging new ideas. It seems it is best to collaborate in moderation.
So really, these artists of today shouldn’t have a vision for the project until they know why they even have a project to do. Then again, perhaps the clients should be aware of the problems that are supposed to be addressed with a new design before they bring their project to the designers.
Carroll concludes the chapter by telling us that the example projects he used employ situations or scenarios that sought to remedy certain problems, but this is not scenario-based design at its best. He promises that true scenario-based design can happen on purpose and not as a by-product of discussion. Perhaps proper use of scenario-based design can avoid these six frequent flaws in the design process.
Authors Selfe and Selfe present a position that computers in the classroom can be a type of inadvertent discrimination against certain groups of students. Underprivileged students who do not have computers often experience them differently than those who have grown up with them. The authors offer ways in which the use of computers can discriminate historically, discursively, and from a feminist point of view. However, I thought they brought up an especially interesting point in discussing capitalism and class privilege.
The authors talk about the metaphors often used in a system’s desktop interface. Elements such as files, folders, the recycle bin, and the fact that the desktop itself represents an office desktop are discriminatory against underrepresented groups who have grown up outside the professional office environment these metaphors serve. The authors suggest that there should be a variety of interface options so that various groups have an interface relevant to their own class situation.
I understand where the authors are coming from on this, but I think they are suggesting something that will not work. Suppose there is a workbench option available as a desktop interface (though I know that some desktop themes reminiscent of this can be downloaded) readily available with the computer. It is simple to customize the overall look of the desktop through the use of a wallpaper or screen saver. Even Gmail and Yahoo! accounts can be customized to an extent. However, the metaphors that have been established are used in more than the “look” of our desktop interface. Programming relies on such metaphors to explain processes and link files. If we were to go with the workbench metaphor and call “folders” “project shelves” and “files” “projects,” a whole new method of instruction for higher computer processes would have to be developed along with those new metaphors. In essence, a lower grade of programming and computer use is created. And isn’t it discriminatory to identify a group by selling them the “Workbench” version of Windows? It almost implies that they are incapable of being comfortable in an environment based on the professional office metaphor. This kind of implication would further widen the knowledge gap that the authors are exposing.
I think that basing the computer experience in multiple languages is the first major step towards eliminating this barrier. Customization of interfaces is an ineffective idea because it will only serve to identify those who are brought up outside of the metaphor and alienate them from that metaphor.
In a book on Web standards, it was inevitable that the focus would eventually fall upon accessibility for the Web. Zeldman presents figures and support for why accessibility is important. However, of all the books I have read so far that talk about this subject, I think in every case it is the easiest sell. Besides complying with the law, designing with standards for accessibility is a good way to promote goodwill among customers and avoid alienating a base of users who are disabled in some way.
I think that one of the main reasons people avoid adhering to accessibility standards was neglected by Zeldman: simply, the average person may assume that someone who is blind, hard of hearing, or impaired in some other way will not use the Internet, and thus there is little need to accommodate the few who might try. On the contrary, tools such as screen readers and alternate CSS coding can allow someone who is hard of hearing, blind, or color blind to enjoy almost the same user experience as others.
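The accommodations involved are often small. As a minimal sketch (the file names and text here are invented placeholders, not anything from Zeldman), alt text and descriptive link text give a screen reader something meaningful to announce:

```html
<!-- Hypothetical sketch: small touches that help screen-reader users.
     "chart.gif" and "report.html" are placeholder names. -->
<html lang="en">
<body>
  <!-- The alt text is read aloud in place of the image. -->
  <img src="chart.gif" alt="Bar chart of sales by region">
  <!-- Descriptive link text beats "click here" for users who
       navigate a page by jumping between its links. -->
  <a href="report.html">Read the full sales report</a>
</body>
</html>
```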
This chapter served as a reminder that these audiences should not be neglected. Zeldman outlines several ways they can be accommodated without designing an alternative version of the site; see chapter 14 for more detailed references.
I thought Zeldman provided an interesting history of tags for QuickTime and Flash in chapter 13. It also serves as an example of how, when people refuse to accept standards created for the good of the Web, the standards must evolve before they themselves become the problem. The W3C wanted the <object> tag to become the standard for adding images and video-type objects to Web sites. This would, of course, allow one tag to encompass most situations. However, those still clinging to old HTML went with <img> instead of <object>. When video formats emerged for Web sites, since the <img> tag could not encompass these videos and allow them to function properly, the <embed> tag was created and implemented.
The <embed> tag remains popular on networking sites such as Facebook, MySpace, and other popular HTML sites. Even among those with little programming knowledge, it is common knowledge that copying the <embed> code and adding it to an appropriate section of a page will place the video. The result of the popularity of these tags was that some browsers had trouble recognizing the <object> tag at all. Despite the W3C’s attempts to block the alternative tags by excluding them from its published lists of tags, they still held their ground.
What the W3C ultimately did to react to the situation impressed me. Seeing that their standard, while it would arguably have eased the placement of video files and images on Web pages, was causing trouble due to lack of acceptance, they added the <embed> and <img> tags as options within the standards. I imagine the committee is still pressing and hoping that <object> will one day be accepted as the norm, but by bending to fit a temporarily unwinnable situation, they retain their credibility as the go-to source for Web standards. Can you imagine what it would have done to their credibility if they had stubbornly stuck by a standard for such common code that didn’t work?
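This coexistence of tags produced the once-common pattern of nesting an <embed> inside an <object>: browsers that understand the outer tag use it, while older ones fall back to the inner one. A rough sketch of the idea, with a placeholder file name and dimensions (not an example from Zeldman):

```html
<!-- Hypothetical sketch of the <object>/<embed> fallback pattern.
     "movie.mov" and the sizes are placeholders. -->
<object width="320" height="256">
  <param name="src" value="movie.mov">
  <param name="controller" value="true">
  <!-- Older browsers that ignore <object> pick this up instead. -->
  <embed src="movie.mov" width="320" height="256" controller="true">
</object>
```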
Zeldman further elaborates on work-arounds, not damning them but acknowledging them as necessary, since not all browsers and Web sites are created perfectly.
I wanted to make a quick post so that my readers (likely only Dr. Kalmbach) know I have been moving steadily through Zeldman. However, the nature of the book, as far as tagging and design standards go, does not seem largely debatable, so content worth posting is scarce. I will say that Zeldman made a compelling argument for the use of XHTML over HTML. In a lot of ways it made me think of the standards I use for editing publications at work for print. There are a lot of ways they could be edited, but having one established style allows for greater consistency. Likewise, one of the perks of the style is the tagging: when to close tags, when to open them, and specifying the document type before beginning a document.
I may revisit the sections on CSS once I begin designing my own Web site. I have increased confidence in O’Reilly, though, as Zeldman also acknowledges him in his book as one of the best sources for CSS design and strategy. I will wrap up the book and proceed to the article recommended during our last session. Hopefully I will be able to post some other thoughts on Zeldman before finishing the book.
So far Zeldman makes a strong argument for accessibility, meaning accessibility both for those who do not have the latest computer and for those who are considered disabled. He had a good example on pages 53-55 of how a Web page might read to someone with limited capabilities viewing the page as raw HTML. Each example of a varied standards situation presents text muddled with tags and coding. Where Zeldman advocates that the <h1> tag should indicate an important headline, many use it to encode a particular typeface, size, or style. This in turn has negative consequences for the text.
It would seem, in fact, that the best way to create a Web page is to focus on three elements (structure, presentation, and behavior) and to use a specific language for each aspect. For example, CSS1 and CSS2 are ideal for presentation because the visual design can be altered without touching the markup, while the actual structure of page elements might be best represented with HTML, XHTML, or XML.
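A minimal sketch of that separation (the headline text and styles are my own placeholders): the markup carries only structure, so <h1> stays a semantic headline, and the stylesheet carries the look.

```html
<!-- Sketch: structure in markup, presentation in CSS. -->
<html>
<head>
  <style type="text/css">
    /* Presentation: change the look here without touching structure. */
    h1 { font-family: Georgia, serif; color: #333; }
  </style>
</head>
<body>
  <!-- Structure: <h1> marks the headline semantically; a screen
       reader announces it as a heading, not as styled text. -->
  <h1>Designing with Standards</h1>
  <p>Body text follows.</p>
</body>
</html>
```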
Despite the strides people like Zeldman have made in implementing Web standards, the ready availability of consumer design programs and operating assistance inhibits design standards from being observed on smaller sites. This stems from the lack of user- and consumer-friendly language for the standards themselves.
Fortunately for us, the major browser makers such as Microsoft and Mozilla have adopted the WaSP standards over the past few years, and this allows most Web sites to still operate. In fact, I have to think consciously to remember the days when particular pages would work only in specific browsers. However, with the constant evolution of design programs and languages, perhaps another mini-era could emerge in which browsers must play catch-up on the standards needed to create a smooth experience for users.
I haven’t had many comments regarding the last portion of the book. Sections have included topics such as building an IA team, making the case for IA, and centralization of IA for enterprises. My thoughts have all been in agreement, as I have not encountered opportunities to build a specific IA team. The logic behind the IA “dream team” makes a fair amount of sense. What has struck me as odd is a brief comment that recurs in the sections on making the case and on centralization.
The authors seem to insist that building an IA for a large Web site should happen over almost a year…that’s great and all, but I wonder how they would react to the way things are done on Earth. I wholeheartedly believe that the fastest way to kill a case for the need for an information architect would be to say that not only does the company have to hire another set of employees or a consulting firm, but it will have to retain the group for over a year before the product they wanted is produced. Ideally it would be great, and after reading the book I see the benefit of taking a significant amount of time to carefully plan the structure of a site. However, I believe that this timeline is an impossible sell.
Also, the authors don’t address the fact that in a year’s time, the entire structure of the company itself can change dramatically. An architecture that has been in the planning stage for six months could become total garbage after a single board meeting that reorganizes the company, introduces a new product line, or discontinues major services. All realistic possibilities.
Anyway, I suppose this surprised me more than it should have, because the authors have regularly acknowledged that it is a tough sell, especially for ROI thinkers, though it would also seem that way to “gut thinkers.” Unfortunately, most companies will only try to build a new site when theirs is horrendously out of date. When the decision is finally made to redesign, the last thing people will want to do is wait a year for their finished product.
- Crossing the Digital Divide
- Mouse pad rhetoric
- Sarah Palin rhetoric
- Chapter 9 Postmodernism, Indie Media, and Popular Culture
- Convergence Culture
- Digitizing Race and the Matrix
- Modules in video games
- Weather channel
- Figure/ground, framing, and grids
- Grids, layers, hierarchy, transparency, modularity, patterns
- Readings on disability
- Rhetoric of Walls