Tuesday, November 30, 2010

Week 13 Blog Comments

I commented on the following blogs this week:

http://att16.blogspot.com/2010/11/discussion-topic-online-privacy.html?showComment=1291164616353#c6602191237236722480

http://kel2600.blogspot.com/2010/11/reading-notes.html?showComment=1291165063312#c3105076267586106684

Week 12 Muddiest Point

I have no experience using Facebook or MySpace, thus my question:  I know a lot of businesses and organizations have Facebook pages, and they tell you to go to their Facebook page for more information.  What do you get from a Facebook or MySpace page that you can't get from that organization's Web page?  I'm thinking of the slide from this week of ALA's MySpace page. 

Tuesday, November 23, 2010

Week 12 Readings

Allan's article on using wikis to manage library instruction programs seems like a no-brainer to me.  He cites two advantages of library instruction wikis:  sharing knowledge and cooperating in creating resources. Having taken the Library Instruction course here in LIS (which I'd highly recommend, actually), I have a new appreciation for the work that goes into lesson plans and handouts.  Why not share these materials to take advantage of individuals' areas of expertise and to help distribute the workload?  The article lists a few commercial sites for creating your own wiki--I don't know if these are still around, though, after three years.  

Social tagging is interesting to me, in that it allows the user to create his or her own subject headings for things.  But I'm on the fence about whether it's a good idea for librarians to "give up the reins," as Arch puts it, in controlling vocabulary.  Based on my own experience using del.icio.us, I can appreciate the problem of variation in tags--I never remember if I've run all the words in a tag-phrase together, or if I've separated them with an underscore, or if I've abbreviated a term, or used caps or lowercase--my own tags are a mess! I can see where a folksonomy created by library patrons, if left to their own devices, could create a kind of chaos!  (Loved the term "spagging" for "spam tagging"!)
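My messy del.icio.us tags got me wondering how much of that variation could be cleaned up automatically. Here's a toy sketch of my own (the tags and the cleanup rules are invented for illustration, not from the Arch article) showing that lowercasing and stripping separators collapses some variants, but can't rescue abbreviations:

```python
# Four ways I might have tagged the same concept (invented examples):
raw_tags = ["LibraryInstruction", "library_instruction",
            "Library Instruction", "libinstruction"]

def normalize(tag):
    """Naive cleanup: lowercase, drop spaces, underscores, and hyphens."""
    return tag.lower().replace("_", "").replace(" ", "").replace("-", "")

# The first three tags collapse to "libraryinstruction", but the
# abbreviated "libinstruction" survives as a separate tag.
unique = {normalize(t) for t in raw_tags}
print(unique)
```

No amount of string cleanup knows that "libinstruction" meant the same thing--that's exactly the gap a controlled vocabulary is supposed to fill.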

I enjoyed reading the article on "Weblogs" for its background information and history.  Having spent the term contributing to my own blog, I could really understand the pros--fostering collaboration, focus, and communication; simplicity; practicality, with its time-stamping and organized trail of discussion--and the cons--the reader has to go to the blog (unless he or she signs up for notifications or RSS feeds).  I can certainly see how blogs could be used in the library--either among coworkers, as a way to facilitate training, or among patrons, as an extension of reference services.  I've actually really enjoyed keeping my blog throughout the term.  I'm just not sure the world needs me to keep mine up after this term is over, given the millions that are already out there!  :)

"Jimmy Wales on the Birth of Wikipedia" was a very informative video.  I learned a lot about this Web site that I use every day: it's operated by thousands of volunteers, funded by donations, had over 600,000 articles in English (and that's as of 2005), and was a "Top 50" Web site bigger than the New York Times.  His "chaotic" model costs about $5,000 a month for bandwidth.  I liked how he said, "It isn't perfect, but it's better than you'd expect," and I guess that's how I've come to view Wikipedia--as a starting point. Wales discusses how the Wikimedia Foundation handles controversies, how they handle quality control, what types of software tools they use, and how they are governed.  I thought it was very interesting at the end, when he said that teachers are finally beginning to use Wikipedia, and that he sees free-license textbooks as the next big thing in education.

Week 11 Muddiest Point

So, say I have a Web page.  Is my Web page indexed only when someone else's Web page has a link to mine? 

Friday, November 19, 2010

Week 11 Blog Comments

I commented on the following blogs this week:

http://maj66.blogspot.com/2010/11/week-11-readings.html?showComment=1290178682446#c3744921410295206481

http://marclis2600.blogspot.com/2010/11/readings_18.html?showComment=1290179981314#c7974591853663616894

Week 11 Readings

The Hawking articles bring to light the "miracle" that is Web crawling and searching.  It's really hard for me to grasp such incredibly huge numbers when we're talking about data structures containing billions of URLs and hundreds of terabytes of data (and that's in 2006!).  But suffice it to say that I'm amazed by the design of the crawling algorithms that manage to index what we know as the Web.  Further, it's all rather like sci-fi to me that these crawling machines "know" which URLs they're responsible for, and which to pass on to another machine; that they "mind their manners" by waiting their turn to send a request to a server so as not to overload it; that they are able to recognize duplicate content; and that they're able to detect and reject spamming. 
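Those "sci-fi" behaviors felt more believable to me once I sketched them out. Here's a toy model (the machine count, the delay, and the function names are all my own invention, not from Hawking's articles) of two of them: hashing hostnames so each crawler knows which URLs are "its own," and a politeness delay between requests to the same server:

```python
import hashlib
import time
from urllib.parse import urlparse

NUM_CRAWLERS = 4        # hypothetical fleet size
POLITENESS_DELAY = 2.0  # seconds to wait between hits to one server

def responsible_machine(url):
    """Hash the hostname so every machine agrees, without coordinating,
    on which crawler 'owns' which URLs."""
    host = urlparse(url).netloc
    return int(hashlib.md5(host.encode()).hexdigest(), 16) % NUM_CRAWLERS

last_visit = {}  # host -> time of our last request to that server

def polite_to_fetch(url, now=None):
    """'Mind your manners': fetch only if enough time has passed
    since we last bothered this server."""
    host = urlparse(url).netloc
    now = time.monotonic() if now is None else now
    if now - last_visit.get(host, float("-inf")) < POLITENESS_DELAY:
        return False
    last_visit[host] = now
    return True
```

Every URL from the same host hashes to the same machine, and a second request to that host within two seconds gets refused--which is roughly the "waiting their turn" behavior the articles describe.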

I expected Bergman's article on the Deep Web to be daunting, if only because of its length, but I found it surprisingly accessible and interesting.  The bottom line:  there's a huge amount--something like 500 times as much--of higher-quality information buried beneath the "Surface Web."  And 95% of that Deep Web information is available for free, if only we could get to it!  I have to admit, in the early pages of this white paper, I was thinking, "But how many of us really need such comprehensive information when we search?  Isn't Google good enough?"  Clearly I was one of those folks who didn't know there was better content to be found! Quality, however, is in the eye of the beholder:  as Bergman says, "If you get the results you desire, that is high quality; if you don't, there is no quality at all" (p. 23).  Google, in fact, MAY be "good enough" for most users; the problem, I think, with using conventional search engines is that people don't know the right search terms to use (just like they don't know the right questions to ask during a reference interview).  The Deep Web is apparently where one should go for quality information, and "when it is imperative to find a 'needle in a haystack'" (p. 25).

I think librarians definitely need to know about the Deep Web--the threefold improved likelihood of obtaining quality results from the Deep Web vs. the Surface Web is convincing.  The thing that worries me is that the article states, "The invisible portion of the Web will continue to grow exponentially before the tools to uncover the hidden Web are ready for general use" (p. 12).  So if the best information is in the Deep Web, then what are we "finding" from the Surface Web for patrons/users?  Yikes--people are making medical, financial, legal, and other decisions based on Google searches!

The Open Archives Initiative (OAI) may be the next-best way to find quality information in the meantime, while we wait for the Deep Web to become accessible. Different communities (libraries, museums, and archives) have adopted the OAI protocol to provide access to their resources.  (I personally found the Sheet Music Consortium to be a fascinating project!)  Shreeves et al. discuss the shortcomings of OAI registries (completeness and discoverability), and improvements the group feels should be made to the registry in the future.  

Thursday, November 18, 2010

Week 10 Muddiest Point

What happens if a person at one institution deposits his work in that institution's digital repository, but then decides to go to another institution?  Can he withdraw his materials out of the first IR and take them with him?  Or does that first institution have rights to his work, for all time?

Wednesday, November 10, 2010

Week 10 Blog Comments

I commented on the following blogs this week:

http://elviaarroyo.blogspot.com/2010/11/unit-10-digital-libraries-and.html?showComment=1289442109956#c3118048193901556948

http://lostscribe459.blogspot.com/2010/11/week-10-reading-assignments.html?showComment=1289443407751#c3067021766759313095

Week 10 Readings

So it’s Week 10, and FINALLY, I’m seeing “the method in the madness” of the preceding weeks.  Without even a superficial knowledge of hardware, software, databases, metadata, open source, storage, networks, the WWW, HTML, and XML, we would not be able to have a discussion about the burgeoning digital library services we’re reading about this week.  Clearly, computer technology and library science have had to merge disciplines in order to create the DLI-1 and DLI-2 projects discussed in Mischo’s “Digital Libraries” article.  The collaboration between computer scientists and librarians less than twenty years ago yielded powerful (re)search tools and opened access to these resources to many people.  Once again, I see how important it is for librarians to understand technology, for THEY are part of the research team that creates these services.

(All I could think of while reading the “Dewey Meets Turing” article was that Google video we watched back in Week 7, and how the company encourages its employees to focus a few hours of each work day on some problem they’re interested in.  This article couldn’t sum up that philosophy better:  “Digital library projects were for many computer scientists the perfect relief from the tension between conducting ‘pure’ research and impacting day-to-day society.”)  So computer scientists had a set of expectations for DLI, and librarians had a set, but the Web “undermined the common ground that had brought the two disciplines together.”  Still, they shared common values, that of “predictable, repeatable collection access and retrieval.”  While technology lays the foundation for these values, this article encourages librarians to broaden their technology horizons so they can contribute to this emerging field. 

Lynch’s article about institutional repositories addresses the value of making intellectual life and scholarship available, and of preserving them.  But he also worries that policies about what may be included in such a repository, if too strict or too complicated, may turn some institutions away from the idea.  His other concerns are preserving digital files in formats that may not be accessible in years to come, ensuring “persistent reference to materials” even as they go through different versions, and recording and documenting the rights and permissions of works deposited in institutional repositories.  He makes a strong case for thoughtful and careful consideration, and for consultation and collaboration with faculty and librarians, before rushing into preserving intellectual work.

Week 9 Muddiest Point

I have no muddiest point this week.  It's all clear as mud.  :)

Thursday, November 4, 2010

Week 9 Blog Comments

I commented on the following blogs this week:

http://skdhuth.blogspot.com/2010/11/unit-9-notes.html?showComment=1288898921005#c8340211425439703522

http://jonas4444.blogspot.com/2010/11/reading-notes-for-week-9.html?showComment=1288899550219#c4403626588150894755

Week 9 Readings

Wow, all the readings this week are really way over my head.  I’m hoping that some of this stuff becomes understandable once I start applying it to creating my Web page, but for now, I have to say that I really don’t follow much of it at all, and the readings only confounded me rather than making any of it clearer.  There’s some hope for the w3schools XML Schema Tutorial, but I can’t really grasp this stuff without actually doing it (why didn’t they have a “Try It Yourself” option, I wonder?).  Ooh, this is all starting to worry me.  I’m not a programmer, by any means.  And this is looking a LOT like programming.

In short:

I have a feeling I didn't read the right Bryan article. . . . The BURKS document I read (from the link in CourseWeb) left me thinking, "Well, it was nice while it lasted," but I guess it just wasn’t relevant anymore, being nine years out of date, and with new versions of software and cheap Internet access.  I guess it shows what some were willing to do for those who did not have computing references and resources readily available to them.

The Survey of XML Standards, Part 1, briefly explains what the author considers the most important core XML standards (and the organizations that set those standards).  Ogbuji explains that XML is based on SGML, that it is simpler than SGML (HA!), and that it is better suited to the Web environment.  The remainder of the article covers the different systems, data sets, data models, and languages that can all be used to work with the structure of a document.  He provides useful references and resources throughout. 

“Extending Your Markup” began, for me, as a very promising informative tutorial on XML language.  In the first section alone, I learned three main points:

•    SGML lets you define structure for documents
•    HTML is primarily used for layout on the Web
•    XML (Extensible Markup Language) lets you annotate text

(Those bullet points are from the first page of Bergholz’s article.)  First of all, I never knew what the “X” stood for in XML.  The examples in Figures 1a and 1b were helpful, in that 1b was clearly easier to read and understand.  But I’m afraid after that, the author lost me. 
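Bergholz's distinction--HTML for layout, XML for annotating meaning--clicked for me with a little side-by-side sketch of my own (the element names below are invented for illustration, not taken from his figures):

```xml
<!-- HTML-style markup says how the text should LOOK on the page: -->
<!-- <b>Gone with the Wind</b>, <i>9.99</i> -->

<!-- XML-style markup annotates what the text IS
     (these element names are made up for this example): -->
<book>
  <title>Gone with the Wind</title>
  <price currency="USD">9.99</price>
</book>
```

A program reading the XML version can tell a title from a price; the bold/italic version can't promise that.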

I’m sure I will come to rely on the w3schools “XML Schema Tutorial.”  I was thinking, “Why do I have to know this stuff?” until I read the following in one of the chapters:  “Even if documents are well-formed they can still contain errors, and those errors can have serious consequences.  Think of the following situation: you order 5 gross of laser printers, instead of 5 laser printers. With XML Schemas, most of these errors can be caught by your validating software.”  Hmmm.  Okay.  That's pretty impressive.  I guess I can see why I might want to know this stuff. . . .
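To make that laser-printer example concrete for myself, here's a schema fragment in the style of the tutorial (the element name and the cap of 99 are my own invented example, not from the reading): the order quantity is restricted so that a slip like "720" (5 gross) fails validation instead of sailing through.

```xml
<!-- Hypothetical fragment: quantity must be a positive integer no
     greater than 99, so "720" (5 gross) gets rejected by a validator. -->
<xs:element name="quantity">
  <xs:simpleType>
    <xs:restriction base="xs:positiveInteger">
      <xs:maxInclusive value="99"/>
    </xs:restriction>
  </xs:simpleType>
</xs:element>
```

This follows the w3schools pattern of taking a built-in datatype and narrowing it with restriction facets.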

Week 8 Muddiest Point

I understand that I need to create a Web page in Notepad.  But how do I then get that text on the Web?  Jiepu kept clicking on a little Mozilla Firefox icon in his ribbon at the bottom of the screen, but I don’t have a ribbon like that, and I can’t figure out how to get my HTML text into a Web page.  (Sorry--but I'm really new to Web design and all of this. . . .)

Monday, November 1, 2010

Assignment 5 - Koha

Here is the link to my Koha list:

http://upitt01-staff.kwc.kohalibrary.com/cgi-bin/koha/virtualshelves/shelves.pl?viewshelf=49

My username is HAVRAN.
My list name is magpie2600.

My list includes some great books I’ve read recently, and a few that I’m looking forward to reading some day.