Tuesday, November 25, 2008

Unit 13 Readings

The YouTube video was taken down due to a copyright violation.

"EPIC - TIA"
  • Latest News: EPIC has urged scrutiny of the Department of Homeland Security's proposed Office of Screening Coordination and Operation because it would oversee vast databases of digital fingerprints and photographs, eye scans, and personal information from American citizens and legal foreign visitors - yet the proposal does not explain how the office would protect privacy rights
  • In 2002, DARPA founded TIA (Total Information Awareness) - intended to detect terrorists by analyzing vast amounts of information
  • It worked through projects such as Project Genoa and the Human ID at a Distance program - the main goal of all its projects was to build a "virtual, centralized, grand database"
  • In 2003, Congress eliminated funding for the controversial project

"No Place to Hide"

(The link on Courseweb gave me an error that it could not find the page - so here are two websites I used instead: a review of the book, http://www.simonsays.com/content/book.cfm?sid=33&pid=503479, and a site with information about the book, http://www.noplacetohide.net/ )

From the book review...

  • A book written by Robert O'Harrow Jr.
  • Gives details of how private data and technology companies and the government are together creating an industrial complex, a national intelligence infrastructure
  • Explains how the government depends on a large reservoir of information about aspects of everyone's lives to promote homeland security and fight terrorists
  • The example of the grocery discount card is unsettling
  • The book explores the impact of this new security system on our privacy, autonomy, liberties, and traditions

From the "No Place to Hide" Site

  • Gives all kinds of updates on new information pertaining to what No Place to Hide brings out - privacy with information issues
  • Gives a list of the people who are/were responsible for dramatic measures that call our liberties into question - such as the Patriot Act
  • Allows you to read the final chapter - chapter 10 of No Place to Hide

Tuesday, November 18, 2008

Unit 12

"Using a Wiki to Manage a Library Instruction Program"

By using a wiki, library instructors will be better informed because they can share, facilitate, and collaborate with one another more easily on resources and classroom ideas/materials. Wikis for library instruction have two main uses: sharing knowledge and cooperating on the creation of resources. For example, at the Charles C. Sherrod Library at East Tennessee State University, the library instruction program uses a wiki both to share knowledge (such as assignment directions that turned out to be unclear) and to collaborate on resources (placing pre-existing handouts on the wiki and updating them to reflect new information).

"Creating the Academic Library Folksonomy"

I thought this article was interesting because it gave us a view of del.icio.us beyond our own experience (using CiteULike for an assignment and the hands-on exercise with del.icio.us). Sites such as CiteULike and del.icio.us let you "bookmark" pages on the Internet so you don't lose your bookmarks when you move from one computer to another. "Such sites allow users to share these tags and discover new Internet resources through common subject headings," a structure called a folksonomy - a taxonomy created by ordinary folks. Applied in a library setting, librarians could create pages pointing students to "good," solid academic articles that they might have trouble finding on their own. I was eager to start my own site, and possibly, when I do a field placement at a public library, to introduce the idea as a project; the toy sketch below shows the basic idea.
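A toy sketch of the idea in Python (the users, URLs, and tags are all invented by me): the folksonomy is nothing more than everyone's tags aggregated together, with the popular ones becoming de facto subject headings.

```python
from collections import Counter

# Three invented users tag the same article, each in their own words.
tags_by_user = {
    "user_a": ["libraries", "rural", "outreach"],
    "user_b": ["rural", "public-libraries"],
    "user_c": ["rural", "libraries"],
}

# Aggregating the tags yields the folksonomy: shared headings emerge bottom-up.
folksonomy = Counter(tag for tags in tags_by_user.values() for tag in tags)
print(folksonomy.most_common(3))   # [('rural', 3), ('libraries', 2), ('outreach', 1)]
```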

"Weblogs"


  • A weblog, or blog, is a website that resembles a personal journal and is updated with individual entries or postings
  • A distinguishing feature of blogs is that entries are archived automatically
  • They cover almost anything - a big trend is people covering current issues that are academic in nature, which is causing a shift in how some groups communicate
  • Blogs are so popular because of the simple "out-of-the-box" tools that allow for their creation and maintenance. There are many different kinds, from free and simple blogs to "robust packages."
  • Librarians can use blogs for projects, training, scheduling, reference, and team management. Librarians can also encourage students to use blogs for their assignments or projects

"Jimmy Wales: How a Ragtag Band Created Wikipedia"

  • Wikipedia (WP) was a radical idea
  • How it works from the inside: Wikipedia is run by a nonprofit; the goal is to get Wikipedia to everyone in order to help them make better informed decisions - but this means more than just the Internet, which is why they chose the free licensing model, so that people can copy and distribute it
  • Cost - $5,000 a year for bandwidth, plus one employee, Brian, who is the software developer
  • Quality - not perfect, there are weaknesses, but much better quality than you would expect
  • How do they manage quality? Mostly social policies, especially the neutrality rule: they don't argue about "truth" or "objectivity," and they get a lot more work done by not milling around over the controversial sides of a topic
  • Every edit creates an entry on a change page, so a page can be reverted to an earlier version (as in cases of vandalism)
  • Jim needs some control, but he sticks to the rules (and so do the other people who are voted into powerful positions in the Wiki community)
  • Jim stated that there is a widely held belief that all academics and teachers hate it; he says that is not true - but I think there is an overwhelming amount of evidence that it is still so at many colleges
  • Wiki People Project - it may take 20 years or so - to give people all over the world, in all kinds of social positions and at all learning levels, materials they can use and understand (you can't give someone who never had the chance to go to high school an encyclopedia written at a college level and expect them to get something out of it)

Muddiest Point - This has to do with this week's hands-on assignment. What is the big difference (in academic opinion) between CiteULike and del.icio.us?

Wednesday, November 12, 2008

Unit 11 Readings

"Dewey Meets Turing"

  • A more intriguing aspect of the DLI was its ability to unite librarians and computer scientists
  • Computer scientists could see how the DLI project could help current library functions move forward
  • And libraries saw the project as a way to infuse needed funds
  • Libraries also saw that information technologies were important for them to have an impact on scholarly work, and that the projects would raise the expertise of the library community to a level not seen before
  • The Web changed many of the DLI projects' plans - the Web blurred the distinction between consumers and producers of information, which undermined the common ground that had brought computer scientists and librarians together
  • For librarians, the Web was much more difficult to integrate into their work
  • Against some predictions (about the WWW), the core functions of librarianship still remain today because information still needs to be organized, collated, and presented
  • The accomplishments of the DLI have broadened opportunities for library science rather than marginalizing the field

"Digital Libraries"

  • There is a big difference between providing access to discrete sets of digital collections and providing digital services
  • Information providers designed enhanced gateway and navigation services to address this gap
  • "Aggregate - virtually collocate - federate" = the mantra for digital library projects
  • Offshoots of the DLI projects started things such as Google and the OAI-PMH

"Institutional Repositories"

  • Institutional Repository (IR)- a set of services that a university offers to the community for the management and dissemination of digital materials created by the institution and its community members
  • Represents a collaboration among librarians, IT, archives and record managers, faculty, and university administrators and policymakers
  • An essential prerequisite is preservability - an item must be preservable before it can be claimed as a permanent part of the collection
  • It is unclear what commitments are being made to preserve supplemental materials - but they should be part of the record
  • The IR is a complement and a supplement to traditional publication venues
  • IRs allow dissemination of materials and encourage the exploration of new forms of digital and scholarly communication
  • Need to guarantee both short- and long-term accessibility - need to keep adopting new accessible forms of technology (update often)
  • Cautions: IRs used as tools of control over intellectual work, IR information overload, and hastily made IRs without much real institutional commitment because the institution wants to get the project done quickly

Muddiest Point: The IR article talks about federation as being just in its infant stages, while the "Dewey Meets Turing" article includes "federate" in its mantra - where does the federation of digital materials stand today in the digital library realm?

Monday, November 10, 2008

Assignment Number 6

My Website:

http://littleelmquist.synthasite.com/

I had a few problems with the website - it wouldn't let me edit My 2600 page - so things like "easy acessible" should be "easily accessible" and "to" should be "two," but it gives me some errors and won't allow me to touch it.

I didn't have a hard time loading it into FileZilla, but just as it won't let me view the sample webpage on our assignment sheet, it won't let me view my own - telling me that I do not have permission to access the site from Pitt.

Wednesday, November 5, 2008

Unit 10 Readings

"Web Searching Engines: Part 1"



  • Search engines crawl and index around 400 terabytes of data
  • A full crawl would saturate a 10-Gbps network link for more than 10 days
  • The simplest crawling algorithm uses a queue of URLs, initialized with one or more "seed" URLs (see the Python sketch after this list)
  • But this simple method could only fetch 86,400 pages per day (it would take 634 years to crawl 20 billion pages) - crawler parallelism is one solution, but alone it is still not sufficient to achieve the necessary crawling rate, and it can bombard web servers and overload them
  • To increase crawling's success, a priority queue replaces the simple queue
  • Ranking depends heavily upon link information
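To make sure I followed this, here is a minimal Python sketch of that queue-based algorithm. This is just my own illustration, not the article's code - `fetch` and `extract_links` are placeholders standing in for the parts the article doesn't spell out (downloading a page and pulling out its links):

```python
from collections import deque
from urllib.parse import urljoin

def crawl(seed_urls, fetch, extract_links, max_pages=100):
    """Toy queue-based crawler: seed the queue, fetch, enqueue new links."""
    queue = deque(seed_urls)
    seen = set(seed_urls)
    fetched = 0
    while queue and fetched < max_pages:
        url = queue.popleft()
        page = fetch(url)                  # placeholder: download the page
        fetched += 1
        for link in extract_links(page):   # placeholder: pull hrefs from HTML
            absolute = urljoin(url, link)
            if absolute not in seen:       # never re-queue a URL we've seen
                seen.add(absolute)
                queue.append(absolute)
    return seen
```

Swapping the `deque` for a priority queue ordered by estimated page value is the improvement the last crawling bullet mentions.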

"Web Searching Engines: Part 2"

  • Search engines use an inverted file to rapidly identify indexing terms
  • An inverted file is a concatenation of the postings lists for each distinct term
  • Indexers create inverted files in two phases: scanning and inversion (see the Python sketch after this list)
  • For high quality rankings - indexers store additional info in the postings
  • Search engines can reduce demands on disk space and memory by using compression algorithms
  • PageRank - assigns different weights to links depending on the source's page rank
  • Most search engines rank using a combination of signals such as link popularity, spam score, click counts, etc.
  • To speed searching up: skipping (such as common words like "and" and "the"); early termination; assigning document numbers; and caching
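Since the inverted file is the heart of this article, I tried sketching a toy one in Python (my own simplification - whitespace tokenization and two tiny documents standing in for a real indexer's two-phase scan and inversion):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to a postings list of (doc_id, positions)."""
    index = defaultdict(list)
    for doc_id, text in enumerate(docs):
        positions = defaultdict(list)
        for pos, term in enumerate(text.lower().split()):
            positions[term].append(pos)              # scanning phase
        for term, pos_list in positions.items():
            index[term].append((doc_id, pos_list))   # inversion phase
    return index

docs = ["the cat sat", "the dog sat on the cat"]
index = build_inverted_index(docs)
print(index["cat"])   # [(0, [1]), (1, [5])]
```

Storing the positions in each posting is the kind of "additional info" that supports higher-quality ranking - and it is also what the compression algorithms shrink.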

"Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting"

  • Initially released in 2001, developed as a means to federate access to diverse e-print archives through metadata harvesting and aggregation (a sample harvesting request appears after this list)
  • Mission is to "develop and promote interoperability standards that aim to facilitate the efficient dissemination of content."
  • Others have begun to use the protocol to aggregate metadata relative to their needs
  • Provides users with browsing and searching capabilities as well as accessibility to machine processing
  • OAI is divided into: data providers, repositories, and service providers
  • EXs: Sheet Music Consortium and National Science Digital Library (NSDL has the broadest vision of the OAI service)
  • Issues: completeness, searchability and browsability, and amenability to machine processing
  • Ongoing challenges: variations and problems in data-provider implementations and in the metadata itself; and, third, a lack of communication among service and data providers
  • (interesting connection to the Dublin Core - controlled vocabularies will be more important as providers try to cope with all the data)
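Out of curiosity, I looked at what a harvesting request actually looks like. A minimal Python sketch (the base URL is a made-up placeholder, though the `ListRecords` verb, the `oai_dc` metadata prefix, and the Dublin Core namespace are part of the real protocol):

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE_URL = "http://example.org/oai"   # hypothetical data-provider endpoint

# An OAI-PMH request is just an HTTP GET with a "verb" parameter.
params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
with urlopen(BASE_URL + "?" + params) as response:
    tree = ET.parse(response)

# Harvested Dublin Core elements live in the dc namespace.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(DC + "title"):
    print(title.text)
```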

"The Deep Web"

  • Traditional search engines do not probe beneath the surface - the deep web is hidden
  • Deep Web sources store their content in searchable databases that only produce results dynamically, in response to a direct request - but issuing such requests one site at a time is a laborious way to search (see the sketch after this list)
  • BrightPlanet's search technology automates the process of making dozens of direct queries simultaneously using multiple-thread technology
  • It is the only search technology so far capable of identifying, retrieving, qualifying, classifying, and organizing both "deep" and "surface" content - described as a "directed-query engine"
  • The deep web is about 500 times larger than the surface web, and 97.4% of deep websites are publicly available without restriction; yet they receive only about half the traffic of a surface website
  • Deep websites tend to return about 10% more documents than surface websites (sw) and nearly triple the quality of sw
  • Quality = both the quality of the search and ability to cover the subject requested
  • Most important finding: large amount of meaningful content not discoverable with conventional search technology and a lack of awareness of this content
  • If deep websites were easily searchable users could make better judgements about the information
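As I understand it, the "direct request" the article describes is just a query posted to a database-backed search form - something a crawler following static links never issues. A hedged Python sketch (the URL and field name are invented; every deep site defines its own):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

SEARCH_URL = "http://example.org/catalog/search"   # hypothetical deep-web form

# The results page is generated dynamically, only in response to this query -
# there is no static page for a conventional crawler to stumble across.
query = urlencode({"q": "rural librarianship"}).encode()
with urlopen(SEARCH_URL, data=query) as response:
    print(response.read()[:200])
```

BrightPlanet's "directed-query engine" automates issuing dozens of queries like this at once.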

Muddiest Point: Why hasn't the deep web come to light sooner? Why haven't IS and other tech-savvy students been using it before? Are there problems with BrightPlanet's search engine? The last article does not seem to address any problems of deep search engines.

Wednesday, October 15, 2008

Unit 9 Notes

"An Introduction to the Extensible Markup Language (XML)"

I tried to focus on what XML is and is not - in hope to distinguish it from other things that we are learning about dealing with computer language...
  • XML is a subset of SGML designed to make it easier to interchange structured documents over the Internet
  • Unlike SGML, XML does not require the presence of a DTD, which means an XML system can assign a default definition for undeclared components of a markup
  • XML was not designed to be a standardized way of coding text - instead, it is a formal language that can be used to pass information about the component parts of a doc to other computer systems
  • XML differs from other markup languages because it sets out to clearly identify the boundaries of every part of a doc (see the sketch after this list)
  • XML-coded files are ideal for storing in databases because they are object-oriented and hierarchical in nature - meaning they can adapt to any type of database = ensuring transferability to many types of hardware and software environments
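To see the "boundaries of every part of a doc" idea concretely, here is a tiny invented example, parsed with Python's standard-library ElementTree (my own sketch, not from the article):

```python
import xml.etree.ElementTree as ET

# Every part of the document is explicitly delimited by tags,
# so software can recover the pieces without guessing.
doc = """<?xml version="1.0"?>
<book id="b1">
  <title>No Place to Hide</title>
  <author>Robert O'Harrow Jr.</author>
</book>"""

root = ET.fromstring(doc)
print(root.tag, root.attrib)     # book {'id': 'b1'}
print(root.find("title").text)   # No Place to Hide
```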
"Extending Your Markup: An XML Tutorial"

For some reason I did not find this article as helpful as the one above - and actually, it made me more confused about XML than I was after reading the previous article...
  • XML = semantic language that lets you meaningfully annotate text
  • XML syntax starts with a prolog and contains one element
  • Elements can be either nonterminal or terminal
  • Elements have 0 or more attributes and attributes can have different data types
  • There are XML extensions which include namespaces which allow more powerful addressing and linking abilities
  • DDML, DCD, and SOX address several disadvantages of XML DTDs
  • Developments to watch = RDF (Resource Description Framework) and DOM (Document Object Model)
"Survey of XML Standards: Part 1"

I found this article very confusing - XML was supposed to be something simple... but with all these "technologies," which seem so alike... I'm not convinced that it is.
  • XML - vast and growing with a huge variety of standards and technologies that interact in complex ways
  • Outlines the most important XML technologies - all are standard
  • XML 1.0 to XML 1.1 - the first revision of XML, which changes the definition of a well-formed XML document (doc or docs, below)
  • XML 1.1 also revises the treatment of characters and adds to the list of line-end characters
  • XML Namespaces (also brought up in "Extending your Markup") which is a mechanism for universal naming of elements and attributes in XML docs
  • XML Base - means of associating XML elements with URIs in order to specify how relative URIs are resolved in relevant XML processing actions
  • XInclude (XML Inclusion) provides a system for merging XML docs
  • XML InfoSet (Information Set) defines an abstract way of describing an XML doc as information sets, with specialized properties
  • Canonical XML - generates a physical representation of an XML doc that accounts for the variations allowed in XML syntax without changing the meaning
  • XPath (XML Path Language) is a syntax and a data model for addressing parts of an XML doc, and is also the most successful XML technology besides XML 1.0 itself (see the sketch after this list)
  • XPointer (X Pointer Framework) defines a language that can be used to refer to fragments of an XML doc
  • XLink (XML Linking Language) provides a generic framework for expressing links in XML docs and allows much richer linking than one-way HTML
  • Relax NG - an XML schema language that can be used to define XML vocabularies
  • W3C XML Schema defines (yet another) schema language for XML. One part allows you to constrain the structure of the document; the second part allows you to constrain the contents of simple elements and attributes
  • Schematron - Schema language that uses a different approach than DTD, Relax NG, or WXS. You register a collection of rules against which the XML doc is to be checked, rather than mapping out the entire tree structure of the XML format you are trying to express from root node to the leaves.
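Since XPath is called the most successful of these, here is a tiny self-made example - Python's ElementTree supports a subset of XPath, and the two-book document is invented:

```python
import xml.etree.ElementTree as ET

doc = """<library>
  <book year="2005"><title>No Place to Hide</title></book>
  <book year="2001"><title>Some Other Book</title></book>
</library>"""

root = ET.fromstring(doc)
# Address every title element under any book element.
for title in root.findall(".//book/title"):
    print(title.text)
# Predicates filter on attributes: the title of the 2005 book.
print(root.find("book[@year='2005']/title").text)
```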
"XML Schema Tutorial"

Again, probably the most helpful article of them all. I really like the way this website approaches tutoring those who have no idea what they are doing with markup languages. It says all the basic things the first two articles said, but gives better explanations and simply displays what it is talking about more clearly.

Muddiest Point: How can an XML system assign a default definition for undeclared components of the markup when SGML cannot (wouldn't someone just create a fix for that in SGML)?

Assignment 5: Collection Building in Koha ILS

My library collection is called "Public Libraries" and the URL is:

http://pitt4.kohawc.liblime.com/cgi-bin/koha/bookshelves/shelves.pl

Wednesday, October 8, 2008

Week/Unit 8 Readings

HTML Tutorial and CSS Tutorial

I found both of these websites extremely helpful, although at first, after reading the HTML one, I was a little confused by the CSS tutorial. The more I read about CSS, the more I understood how it can help HTML (at first I thought it was a whole new thing). I think CSS is a good complement to HTML, especially when you want to add more character or personality to your webpage. Overall, I was glad to finally learn how to do the simple webpage-building things I have always wondered about.

HTML Cheatsheet
I found the HTML tutorial more helpful, since it gave examples and went more in depth about how to start a web page. The cheatsheet would be good to use once you have read the HTML tutorial and need a quick reference back to what you read while building a web page.

Beyond HTML

I had never heard of CMS - I am sure I have probably used one before without knowing it - but it made sense for GSU to switch over to a CMS. The problem with the GSU library websites was that they were not standardized: librarians were publishing their own work, to their own standards, which seemed to confuse a lot of people (and wasted a lot of time on individual formatting). The quality and consistency of the library guides were not as professional as GSU wished them to be. Switching over to a CMS solved many of these problems.

What makes a CMS so useful to librarians is that "content is disconnected from the layout and design element of the page... instead of devoting time with HTML or FrontPage to create the structural or presentational display... the librarians can focus instead on identifying, creating, annotating, and selecting the content itself." So the CMS does all of the formatting, which gives it a standard look and a standard of quality and consistency. Another helpful feature is that a CMS provides just enough control - it gives creators more direct editorial access to their assigned components while acting as a "limited gatekeeper." Another interesting aspect is that once material is entered, say a book, it is available to all other assigned users; if it has already been put in once, it doesn't have to be put in again, because it can be repurposed or repackaged. Finally, whereas the GSU library had problems with pages accidentally being deleted, the CMS did not have this problem.

The library did a study on the switchover from HTML and FrontPage to CMS - overall it was a successful switchover, with most people agreeing that the CMS helped unify the library's online content while also giving users ease of use.

Muddiest Point: What does the Pitt library use? And has the Pitt library experienced a switchover in the past? If so, what were their results?

Sunday, October 5, 2008

Week 7 Comments

Comment #1

https://www.blogger.com/comment.g?blogID=6994306389856188940&postID=5042425434173258059&page=1

Comment #2

https://www.blogger.com/comment.g?blogID=2759599872455292147&postID=584705319082260696&page=1

Wednesday, October 1, 2008

Discussion on RFID

I posted a short response to the use of RFID in libraries on the Discussion Board in Courseweb.

Week 7 Readings

Google Video
The first map they showed was pretty cool; I had seen it somewhere else but completely forgotten about it. It was really neat to see where the activity was coming from and also where Google hoped to get more activity from (such as Africa). Another neat thing I learned from this video was about the Google Foundation - and how they want to make the world a better place, whether through health, education, or even protecting animals.

Google seems like such a "small" company in their approach at the office and towards their coworkers. I found it so interesting how they adopted the 20% rule in which they allow their workers to do something that is important to them - such as project Orkut. It was also neat to hear more about what they do for their employees and how they are trying to make work a more relaxing environment (such as the dog).

Finally, when he mentioned one of the greatest things about Google - that it makes its money from advertising, which means that everyone around the world, even the poor, can use Google for free rather than the tool belonging only to wealthy nations - it really made Google seem like a company that cares about people and about spreading information (rather than hoarding it).

Overall, I found this video extremely interesting to see more of a personal side about Google rather than just the non-humanlike search bar and results.

"How Internet Infrastructure Works"

I found this article somewhat interesting - there were many things that I learned, although a few things confused me.
  • ISP - when you connect to your ISP, you become part of its network; the ISP then connects to a larger network, so the Internet is a network of networks
  • POP - the place for local users to access the company's network; there is no overall controlling network, but high-level networks connect through NAPs
  • Importance of backbones and routers: routers make sure that information goes where it is supposed to go and reaches the intended destination; backbones are the fiber-optic trunk lines
  • So neat - information can travel halfway across the world through several different networks in a fraction of a second
  • Every machine on the Internet has an IP address; IP is the pre-defined language that computers use to communicate over the Internet
  • URL - contains the domain name; DNS servers translate the human-readable domain name into the machine-readable IP address (see the one-line sketch after this list)
  • Redundancy - one key to making DNS and IP work
  • Internet servers make the Internet possible - all machines are either servers or clients, and a server machine makes its services available to users (clients)
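The DNS step was the easiest one to try out myself - one call in Python translates the human-readable name into the machine-readable address (a tiny sketch; any domain name works):

```python
import socket

# DNS resolution: human-readable domain name -> machine-readable IP address.
print(socket.gethostbyname("www.pitt.edu"))
```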
"Dismantling Integrated Library Systems"
Some interesting points, and things I learned from this article:
  • The web creates opportunities, challenges, and expectations that are changing ILS
  • Old systems tend to be inflexible and nonextensible
  • New expectation - that new modules will communicate with old ones, and that vendors, libraries, and ILSs can all work together under the new models - BUT interoperability is not happening; it is more myth than reality
  • Creating a new ILS from scratch is unrealistic, and besides, it doesn't fit libraries' needs, since they want one-stop search and retrieval rather than a myriad of information silos
  • Libraries didn't pay enough for their ILSs - but that is changing now that there are new standalone models to purchase, plus maintenance costs and the expense of supporting the technology
  • Some of the best ideas in online library services come from librarians themselves - because they can experiment, develop, and offer what works best for them, rather than having a non-librarian try to build something for librarians to use
  • Open source - very important to ILS - value of open standards and protocols - this is what librarians now believe they can create interoperability with vendors
  • Future = integration
  • "Library systems are changing because library assets are changing"
Muddiest Point: Why isn't there a common ILS used between vendors and librarians to begin with? As soon as one good, reliable ILS was developed, why wasn't it adopted as a standard? Wouldn't that make things easier for both, rather than fighting against the tide of one another?

Saturday, September 27, 2008

Week Six Comments

Comment One:

https://www.blogger.com/comment.g?blogID=8117231295550149245&postID=3435180541295599241&page=1

Comment Two:

https://www.blogger.com/comment.g?blogID=5522596475792783454&postID=843628908685937135&page=1

Thursday, September 25, 2008

Zotero and CiteULike Assignment

The URL:

http://www.citeulike.org/user/Littleelmquist/library

For the most part I did not have a whole lot of difficulty with this assignment. I found that CiteULike has a lot of articles, but not for two of my interests, so I had to stretch a bit beyond what is strictly relevant to find material related to rural librarianship and the history of librarianship.

I think I am going to keep updating my folders, seeing as though I needed some of the stuff for assignments and it is such a good way to keep track of materials you know you could use in the near future.

Wednesday, September 24, 2008

Post on Discussion Board

I posted a short essay on the Discussion Board about YouTube and libraries.

Week Six Reading Notes

YouTube - "Common Types of Networks"

I really liked the ease of this video - he explained it in a simple and clear manner without tons of examples to bog you down or confuse you.
The most common networks in order (1-5) according to him:
1. Personal Area Network - like a personal computer's printer, this is the most common network
2. Local Area Network - in one building, or in one type of facility
3. Wide Area Network - covers a large geographic area
4. Campus Area Network - spans several different facilities, connected by wire or wirelessly, on a college campus
5. Metropolitan Area Network - many, many connections - very big and ever-expanding

Wiki's: "Computer Network"

This is a little more in depth than the YouTube video. It gives a definition of computer networks - a group of interconnected computers, classified according to their characteristics. These characteristics are scale, connection method (the hardware technology used to connect), functional relationship, and network topology (the way devices in the network see their logical relations to one another). Wiki goes into much more detail on the different types of networks, such as the global area network (GAN), and on their connections, such as internetworks (intranet, extranet, and the Internet). A GAN is basically a model for "supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc." One of the most interesting things I learned is that the Internet is not considered part of an intranet or extranet, and how the three kinds of internetwork differ. Wiki also talks about the basic hardware components, which are network interface cards, repeaters, hubs, bridges, switches, and routers. The most interesting thing I learned about the hardware components concerns the router, a networking device that forwards "data packets between networks using headers and forwarding tables to determine the best path to forward the packets." I found this interesting because a router is a common enough piece of equipment that many people rely on, yet few have ever thought about how it works or what it does.

Wiki's: "Local Area Network"

More of a historical look at how the LAN developed. It gives the advantages of LANs over WANs - such as their "higher data-transfer rates, smaller geographic range, and the lack of need for leased communication lines." The article covers the history of the earlier systems, then the personal computer, and finally cabling. It also covers technical aspects, such as Ethernet being the most common data link layer protocol, along with IP, though there are still many other options used by smaller groups of people. The most interesting thing I learned from this article is that LANs can eventually become MANs or WANs, or part of the Internet, depending on how the connections are made.

Karen Coyle's "Management of RFID in Libraries"

I found this article to be the most interesting in this week's set. There are so many possibilities with RFIDs, and although some things must be worked out and other problems solved, I really think RFIDs could help change the way libraries manage their materials - and less expensively, too.

Background on RFIDs:

  • RFID = R(adio) F(requency) ID(entifier) - a tag consists of a computer chip and an antenna
  • Metaphor given helps understand better what an RFID is - "RFID is like a barcode but is read with an electromagnetic field rather than by a laser beam", but that the similarities end there (486).
  • More advanced than a barcode - does not have to be visible to be read, can carry a more complex message, the chip can carry many bytes of information (hopefully will expand even more in the future)
  • It is not a single technology - different RFID products on the market today such as tags that are used for automated toll taking, card keys to gain entrance, or those to track animals
  • Considering future use of them to stop counterfeiting of many products
  • And what varies between them is the amount of information that they carry
  • Libraries use the lower priced ones - with short ranges and limited functionality

Should Libraries use RFID?

  • Privacy Issues
  • In defense of libraries using RFIDs: "Libraries use new technologies because the conditions in the general environment that led to the development of the technology are also the conditions in which the library operates."

RFID and Library Functions

  • RFID for checking items in and out rather than barcodes - because items are returned, the tags are not "throw-away RFIDs"; reused over many circulations, they give more bang for the buck
  • Can be used for security too - the checked-in/checked-out status on the tag can be read by the security gates. But there are problems - tags can be shielded by Mylar and can be removed easily
  • There are still savings on security, because a single RFID tag can perform multiple functions (not just a security tag, not just a barcode, etc.) = an integrated circulation and security system
  • RFID readers (unlike barcode scanners) can read multiple tags at once, allowing a whole stack of books to be checked out in a single transaction
  • Tags can be read on the shelf - the time and cost of taking inventory go down, and more inventories can be completed

Justification and Return on Investment

  • Because libraries do not measure profit as part of their equation, it is harder to demonstrate that RFIDs are worth their cost
  • See Laura Smart's article for 14 gains of libraries if they used this technology
  • Libraries can go 100% self-checkout or split 50-50, letting patrons choose - THE ONLY PROBLEM: replacing librarians with machines means fewer jobs for librarians

Some Problems Remain

  • The problem of less sturdy and oddball items such as cases, magazines, CDs, and sheet music
  • If RFID does not work for such items, then the alternative system still needs to be maintained - two systems running at once - which ends up more expensive, more time-consuming, and probably more confusing
  • Which way are RFID tags leaning? Toward durable tags that carry information, or toward cheap throw-aways? Libraries need durable tags, and if the market shifts toward throw-away ones, there could be many problems

Muddiest Point from the readings: Hubs - why are they used in specific places, and why aren't they used for all network hardware?

Thursday, September 18, 2008

Week 5 Readings

Blogger is not coming through for me to post my reading notes... so I hope this works; this is just to say that my post is on Blackboard under "blogs" for week 5.

Sorry for any inconveniences, I'll try to get it posted up here as soon as it works.

Again, Muddiest Point: all the problems I have been having with Firefox since I downloaded the newer version!!!!!!!!!!

Saturday, September 13, 2008

Comments on Week 4 Posts

Comment 1: Melanija's Blog

https://www.blogger.com/comment.g?blogID=8029602389736197544&postID=5058611067929963371&page=1


Comment 2: Corrine's Blog

https://courseweb.pitt.edu/webapps/portal/frameset.jsp?tab_id=_2_1&url=%2Fwebapps%2Fblackboard%2Fexecute%2Flauncher%3Ftype%3DCourse%26id%3D_9047_1%26url%3D

Thursday, September 11, 2008

Week Four Readings

"Database" - Wiki
  • A computer database = a structured collection of records or data, organized by a database model and stored in a computer system
  • A computer database relies on software, known as a database management system (DBMS), to organize the storage of data
  • The first database management systems were developed in the 1960s, with the two key models being CODASYL (network model) and IMS (hierarchical)
  • IDMS was also all the rage in the 1960s, with PICK and MUMPS databases among the most popular
  • In the 1970s the relational model was proposed, but for a long time it was only of academic, theoretical interest; working systems did not appear until 1976 with System R and Ingres, and even then they were research prototypes, not commercial products
  • By the early 1980s relational databases became commercial products with the launch of Oracle and DB2
  • 1980s research focused on distributed database systems
  • 1990s research was focused on object-oriented databases
  • 2000s - innovation focused on the XML database
  • Database Model Structure
  1. Hierarchical model - data is organized into an inverted tree-like structure with a downward link in each node to describe nesting.
  2. Network model - records can participate in any number of named relationships - each relationship associates a record known as the owner with multiple records of a member
  3. Relational model - information is represented in columns and rows
  • Database Management Systems: Relational database management systems, post-relational database models, and object database models
  • DBMS internals (a toy Python sketch follows this list)
  1. Storage and physical database design - such as flat files, ISAM, heaps, hash buckets, or B+ trees (most common are B+ trees and ISAM)
  2. Indexing - most common is a sorted list of the contents of some particular table column with pointers to the row associated with the value (allows it to be located quickly)
  3. Transactions and concurrency - should enforce ACID rules of: atomicity, consistency, isolation, and durability but many DBMS allow these rules to be relaxed for better performance
  4. Replication - closely related to transactions with concepts including: Master/Slave replication, quorum, and multimaster
  5. Security - to protect the database from unintended activity through an access control, auditing, and encryption
  6. Locking - how the database handles multiple concurrent operations, with locks generally being shared (several operations can hold a lock on the same data object) or exclusive (no other lock can be acquired on the data object as long as the lock lasts)
  7. Architecture - a combination of strategies is used; for example, OLTP systems use a row-oriented datastore architecture
  • Applications of databases - the preferred method of storage for large multiuser applications, where coordination between many users is needed
  • Some DBMS products that might be familiar: BerkeleyDB, Datawasp, FileMaker, IBM IMS, Interbase, and Microsoft Access
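To make the internals less abstract, I tried a toy run in Python's built-in sqlite3 module (my own illustration - the table, index, and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)")

# Indexing: a sorted structure over one column so lookups avoid scanning every row.
conn.execute("CREATE INDEX idx_books_year ON books (year)")

# Transactions: with the connection as context manager, both inserts commit
# together or, on an error, are rolled back together (the A in ACID).
with conn:
    conn.execute("INSERT INTO books (title, year) VALUES (?, ?)", ("Title A", 1998))
    conn.execute("INSERT INTO books (title, year) VALUES (?, ?)", ("Title B", 2005))

for (title,) in conn.execute("SELECT title FROM books WHERE year > 2000"):
    print(title)
```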
"Introduction to Metadata, Pathways to Digital Information"

Metadata is a term used to describe many different forms of data by many different professions. Each profession may use the term in a different way, but what they have in common is that they are communities that "design, create, describe, preserve, and use information systems and resources" (1). All information objects have three key features - content, context, and structure - all of which are reflected through metadata.
In our field, library metadata has focused on providing intellectual and physical access to content, whether through indexes, abstracts, or catalogue records. Another part of our field, archival and museum studies, also uses metadata to organize its information, but it focuses on context - preserving the context is what preserves the value of records and artifacts.
The structure of information has received less attention in this field, but it is still an important component, because professionals realize that the more organized the structure is, the more it can be used to search for information objects.
Metadata outside the repository is explained and used across a broad scope of activities and information. For example, on the Internet it could refer to the information encoded into HTML. An electronic archivist may use it to refer to "all the contextual, processing, and use information needed to identify and document... an active or archival record..." (3).
The Dublin Core Metadata Element Set is acknowledged in this article as a simple set of metadata elements that any community can use to describe and search across many different information resources on the WWW. This is necessary to make sure that the different descriptions of metadata can be searched for and found throughout the WWW (a made-up example record follows these notes).
This article further categorizes metadata so that it is easier to understand in different terms. The categories are administrative, descriptive, preservation, use, and technical metadata. Metadata also has certain attributes and characteristics, such as the attribute "method of creation," with the characteristics of automatic metadata generated by a computer and manual metadata created by humans (5).
Figure 1 makes it easier to understand the life cycle of objects in a digital information system (the phases information goes through during its life in a digital environment). These phases are: creation and multi-versioning, organization, searching and retrieval, utilization, and preservation and disposition.
So why is metadata important? Because it increases accessibility, retains context, expands the use of information, and can heighten interest in multi-versioning of the information. It also helps with legal issues and the preservation of digital information, and it allows system improvements from both the technological and the economic standpoint.
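To picture what a Dublin Core description looks like in practice, here is a made-up record in Python using a few of the standard elements (the values are invented for illustration):

```python
# A few of the Dublin Core elements, applied to an imaginary resource.
record = {
    "title": "Unit 4 Reading Notes",
    "creator": "A. Student",
    "subject": ["metadata", "digital libraries"],
    "date": "2008-09-11",
    "format": "text/html",
    "identifier": "http://example.org/notes/unit4",
}

for element, value in record.items():
    print(f"dc:{element} = {value}")
```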

Muddiest Point: Why do technology professionals allow metadata to describe so many things? You'd think (and I have) that people would become confused and say, "okay, let's come up with a better term for this distinctive set of metadata."

"An Overview of the Dublin Core Data Model"

At first, the article is very "muddy" and hard to understand. I think the introduction and the DCMI Requirements section are chock-full of terminology that may throw off a less technically savvy person from reading about and understanding the DCMI.
Thankfully, the article gets easier to understand, for a short stretch at least - I get its goals of internationalization, modularization/extensibility, element identity, semantic refinement, identification of encoding schemes, specification of controlled vocabularies, and identification of structured compound values. I think the important thing is for a student to understand the DCMI's goals: to create an easy-to-use description model that can succeed at the global level. So, in theory, the DCMI's importance is in giving each property it defines a unique identity along with "human readable labels and clear semantic definitions."

Google Desktop Discussion

I thought he might make a discussion group for the Desktop question; if so, I'll copy and paste this from here to there - but otherwise I'm playing it safe and putting it on here...

The ability of Google Desktop to search all the files on my computer just amazed me. After looking at the three options to download, I chose Google Desktop. What do I think this means for the future of the library and for librarians? For one, if you click the Yahoo download, the X1 advertisement begins, and one of the things it asks is "Tired of the organization battle?" Tools like Google Desktop will have a large impact on how people rely on other applications to find more and more information - even on their own computers! Information overload is apparent in every situation in life - even on your desktop - and Google has realized how much time librarians (and people in general) would save if they could just find that one document they used for an essay, or the one email that had all the answers to that common reference question. With this tool, people can rely less on their memory of what a document was called and more on Google Desktop to quickly find it. I have worked as a reference librarian, so to me this means less time searching for such things and more time to complete other tasks that need to be done.

Sunday, September 7, 2008

Week Three Comments

https://www.blogger.com/comment.g?blogID=8616439771751223038&postID=6555204566696578378&page=1

https://www.blogger.com/comment.g?blogID=5720842264846496247&postID=144324376817901087&page=1

https://www.blogger.com/comment.g?blogID=8117231295550149245&postID=3240127541408633912&page=1

Week 3 Additional Reading Post

"What is Mac OS X?"

This article helped (more than the wiki article) with identifying what exactly makes Mac OS X so different from a PC running Microsoft Windows, even though the author isn't comparing and contrasting them directly. The architecture "tab" is a little beyond me in computer language, and I am guessing beyond most people who are thinking of buying a Mac... but the features and software parts would be most helpful for those considering whether or not to switch to a Mac. My only two concerns are (1) that he specifically lets people know this article should not be used for reference, and (2) that right at the beginning he talks about giving a hacker "over-friendly" information about the Mac OS X system. Once again, as discussed before, this is something I am surprised to see tied in with UP classes.

Blackboard Discussion

(Posted in Week 2 Discussion Point: Digitization)

Wednesday, September 3, 2008

Week Three Readings

"An Update on the Windows Roadmap"

This article focuses on reassuring the customers of Microsoft Windows XP, Vista and future customers of Windows 7 that Microsoft really "cares" about the concerns and problems that they have been experiencing since the debut of Vista. I wish I could have read this article a year ago when Vista began to be used at one of the regional Pitt campuses where I was beginning my senior year. So many people had problems, and many students completely avoided the use of computers in certain halls because they knew the systems were Vista.

The article/letter begins by explaining that XP users will receive support until 2014. It goes on to let consumers know that they can still purchase XP until they are comfortable enough to switch over to Vista. The discussion at the bottom of this article brought up many interesting debates about switching large companies from XP to Vista - costs, training time, system support/failure, and the realization that Windows 7 will be released around 2010. I agree that larger companies, which have the money and the people and want to stay competitive, should switch to Vista, whereas small businesses, libraries, and even some schools (such as elementary schools) would be best off waiting for Windows 7. Many of these smaller businesses and libraries don't have the money to upgrade, nor the people to train the staff to use Vista. When a new system is right around the corner, I believe they would be better off spending the money in other much-needed areas.

Another interesting part of this letter/article was how Microsoft created a "comprehensive 'telemetry system' that lets us gather anonymous information about how real customers are using Windows Vista, and what their experiences are with real applications..." I would be interested to hear from those more technologically savvy than myself whether this really helped, or whether it was used for more than just that purpose. The article states that it helped them make file copying up to 50% faster and let them diagnose and fix customers' top problems. I would have to take the article's word for it, because I really do not know much about the technology.

Finally, he closes his letter by reassuring readers that Windows 7 will have the same core architecture as Vista, so that investments made by users pay off because the systems will be compatible. I think this is meant to convince customers not to hesitate (as many did with Vista) to buy the new Windows 7 when it comes out, and instead to get excited about a new, more advanced Windows.

"Mac OS X"
First of all, I must admit that there is a very interesting conversation going on in "Lindsay's blog" about the use of wiki articles for class.

But I will say that this article tries to be objective about the Mac systems that have come out in the past and are set to come out in the future. I'm not too familiar with Macs, having only used one for an art class for one semester, so my muddiest point would have to be: what is the "BIG" difference between Macs and PCs? What is the big rave about Macs being so much better, when this article highlights how many problems they have had with their systems and how many times they have had to change them or make new ones? Just like Microsoft, this article claims that the launch of Mac OS X in March 2001 was recognized as important because it was understood from the beginning to be "a base on which to improve." At the customers' expense... same as Vista's problems, right?


Muddiest Point: Mac vs. PC - and the big rave about Macs, when this article seems to show that Mac has experienced just as many setbacks and problems with new systems as Microsoft.

"Introduction to Linux: A Hands on Guide"

  1. Freely available system to download from the Internet
  2. A clone of UNIX
  3. Linux developers concentrated on networking and services in the beginning, and are now focusing on office applications as the last barrier to be overcome
  4. Well known for its reliability and stability
  5. Used not only for PCs but also for other electronic gadgets such as mobile phones and PDAs
  6. Has everything a good programmer wants: compilers, libraries, development and debugging tools
  7. The developers realized that if Linux was ever to be an important player, they would have to create a system that was more user-friendly
  8. The new systems are much different from the older, complex installs - you don't even have to type a single character, but you still have access to the core of the system when needed (and for those who want to do everything manually, Linux allows that too)
  9. Open source = people can adapt it, fix it, debug it, and then put it back out there so that others can use it. This means more people working on it, so new discoveries are made and problems are solved much more quickly than if a closed group of programmers were working on it.
  10. Packages are available to fit most systems, so new users are more comfortable trying it out
  11. PROS: free; the GNU General Public License; runs without rebooting all the time; security; packages can be added or removed to fit a customer's specific needs; and because so many people are using it and adapting it, sometimes it takes only a few hours before a bug is discovered and fixed
  12. CONS: more people = more opinions, so there are many versions to pick from (which can seem overwhelming); not very user-friendly and can be confusing for new users; and people question whether it is trustworthy because it is an open product
  13. Based on GNU tools, so it provides a set of standards for how to use and handle the system
  14. When asking what one should install, hardware is the most important factor in deciding whether it will work correctly on your computer

I used the outline format this time for the notes because I feel I don't have much of an opinion or knowledge about this article, since I have never used Linux (or known anyone who uses it). Hopefully this will help me get the main points of the article, but I am still looking forward to what others have to say about it or know about it.

Muddiest Class Point for 9/2/08 class session

After the answer/small discussion about direct and sequential access I was confused by what can be or is considered direct or sequential access.

Tuesday, September 2, 2008

Ongoing Discussion

Another Comment:

https://www.blogger.com/comment.g?blogID=7943070653086840690&postID=2863922694743418879

Sunday, August 31, 2008

Week 2 Comment

Comment on Dustin's Blog:

https://www.blogger.com/comment.g?blogID=8599774071021712765&postID=4431557911953081745

Comment on Lindsay's Blog
https://www.blogger.com/comment.g?blogID=7943070653086840690&postID=3107558838544588712

Second Readings

"Moore's Law"

It is interesting how the article states common misconceptions that many people have about Moore's law - the belief that it applies to everything in computer-technology products. Although "almost every measure of the capabilities of digital electronic devices is linked to Moore's law..." (the article goes on to list a few), not everything, such as software and RAM, increases exponentially. I believe Moore brings up an interesting topic to discuss or debate when he states, "Moore's law has been the name given to everything that changes exponentially. I say, if Gore invented the Internet, I invented the exponential." Most of us would argue that he did not invent the exponential, and that inventions are usually the combined work of inventors throughout time, especially the Internet (now more quickly connected than ever). I think others more technologically savvy, and those with an interest in the "invention" of the Internet, could comment better on this quote. Finally, I agree that Moore's law is only a self-fulfilling prophecy for the marketers and engineers of these products, because they themselves so truly believed in Moore's law that they made it come true. The quick back-of-the-envelope calculation below shows what the exponential actually means.
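A tiny Python check of the arithmetic, assuming the commonly quoted doubling period of about two years (that period is my assumption, not from the article):

```python
# Doubling every ~2 years: a decade of Moore's law is a 32x increase.
doubling_period_years = 2
for years in (2, 4, 10):
    print(years, "years ->", 2 ** (years / doubling_period_years), "x")
```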

Muddiest Point: The computer terminology. I tried to get past it, looked up a few terms I thought would be useful, and tried to get the main ideas out of the article rather than all the technological jargon.

"Computer Hardware"
I am actually glad that we had this article - I was finally forced to read something I have avoided for years. A lot of computer parts and terminology seem a bit clearer to me now after reading it. Interesting point made: most computer hardware is never seen by ordinary users, and our personal computers made up only 0.2% of all new computers produced in 2003 (1). Another interesting thing to note is a conversation I had with a student this past spring about HD DVD being discontinued because of Blu-ray. It seems as though almost everyone finally had the spectacular HD DVD equipment (and all the movies were advertised as HD DVDs) when, lo and behold, Blu-ray became the rave and the hottest "new" thing.

Computer History Website
I'm glad I read the articles in the order that I did because this allowed me to understand what I was viewing once I was on the website. This website also further helped me to understand what exactly the articles were saying.

Important Point: The problem in understanding these articles lies within myself - I'm just not interested in all of the technology talk. I believe that if I had more interest in learning the jargon, I would pick up a great deal more than I have from these articles and the website. But at least they made a dent in my cluelessness about computer technology terminology.

Muddiest Point - Question

I am confused about when to post which assignments. I thought Assignment One for reading one was due this Friday, and Assignment Two and the readings for week two were due Tuesday? Anyone have a clue? I'll start reading now just to get it done in case it was supposed to be posted Friday.

Week 1 Comment

On Sanda's Blog:

https://www.blogger.com/comment.g?blogID=5762270205496001556&postID=3029922404703456112

Thursday, August 28, 2008

Reading Assignment #1

"2004 Information Format Trends: Content Not Containers"
This article addressed the changes taking place not only in libraries but in our culture, our society, and across the world because of new information technologies. One of the main ideas is that people have become "format agnostic," meaning that people have little preference for how information is contained and are more concerned about the information they receive (for a low cost). Print is slowly declining, along with the increase in e-books, online magazine articles, and scholarly journals found online, because online information can be broken down into "micropayments" in return for the smaller pieces of information the user wants (rather than paying for an entire magazine just to read one article).

I found the section on blogs extremely interesting - the fact that 80 percent of those who read blogs do so for news they can't find elsewhere says a lot about what kind of news content people around the world are either censored from or cannot get their hands on other than through the Internet. Even the demographics on the age and income of those who read and create blogs are somewhat surprising. Most people would expect college students and the younger generation to be the majority, but 61% of blog readers are over the age of 30.

Another main point that is stressed is the changing responsibility of libraries to the community. This article stresses that libraries need to move beyond just being collectors and organizers of information to a role that "establishes the authenticity and provenance of content and provides the imprimatur of quality in an information-rich but context poor world" (13). And because research suggests that patrons believe libraries should make content available through emerging information technologies such as Web services, it is up to libraries to synthesize all of this information into a medium the community can use.

Muddiest Point: McLuhan's quote, "The medium is the message." I could possibly be overthinking it, but the explanation by Mark Federman does not make it any clearer to me what McLuhan was trying to tell us.

"Lied Library @ Four Years: technology never stands still"
  • Paper covers: Evolution of Lied Library, Ongoing costs, challenges of Lied, the hopeful future of Lied.
  • Major installations and programs:
    • EZ Proxy, DiMeMa/OCLC's CONTENTdm, OCLC's virtual ref, Serial Solutions A-Z, SFX, the student laptop program, Uniprint, and Millennium
  • Biggest project to date: Replacement of every desktop PC (over 600 units in 2003)
    • Planned every possible step they could think of, and even thought ahead of possible problems that they would run into
    • How they did it, from July to August 2003, in time for the staff to be trained and the students arriving for fall classes
      • Assembly-line installation and assembly of the computers
      • Bar-coded outgoing PCs the day before that area's PCs were to be removed
      • Keeping certain areas up and running, and limiting traffic and users around the computers that would be replaced the next day
  • Costs - ever increasing, such as hardware and operating system support, vendors software, and the upfront costs of computers and other machines (such as printers)
  • Challenges
    • Computing Resource Management
      • Make sure that students have computers to work on for schoolwork, such as limiting community user's time and the laptop program
    • Space Management
      • Biggest problem is library staff areas such as where to place new employees
    • Security
      • Theft - small number, but they want it at zero theft
      • Network security - malicious software
    • Equipment and Software glitches
      • They seem to aim for perfection - the paper states that Lied enjoys 99%+ uptime for all systems
  • Future
    • Maintaining current systems and finding the money to maintain them or upgrade the programs
    • Projected enrollment growth will create challenges for supply and space in Lied
    • Wireless Connection in library
    • Unknown impact of the change in library leadership
  • Conclusion:
    • Continuous effort towards refinement and expansion
    • Stay at the cutting-edge of technologies in libraries
    • Remain a distinctive premiere academic library and a place where all students want to come to do their research
Concern: Their community patrons are still patrons, and some of the steps taken to limit their time on the computers seem drastic, such as using monitoring to single out a community user from a student and ask them to leave if a student needs a computer. What if they are not playing games or chatting? What if they are doing their own research or business emails? It seems as though the other steps taken to reserve more computers for students are enough to make the students number one at Lied.

Muddiest Point: The technology/computer jargon throughout the entire article confused me. Also, since the article is from 2005, I kept wondering how far the technology it describes is behind our current programs and installations - and what the new technology is that I don't know about either (or couldn't understand).

"Information Literacy and Information Technology Literacy: New Components in the Curriculum for a Digital Culture"
  • Compares the meaning of information technology literacy with the meaning of information literacy, which concerns the content and communication of these technologies
  • Believes it is crucial for people to know and understand both forms of literacy in order to function successfully in today's society (these new technologies are the main way to receive and send information)
  • Goes far beyond traditional textual literacy that one learns as a basic in school
  • Information Technology Literacy
    • Two perspectives on information technology literacy:
      • one that focuses on the use of the tools through one's skills
      • second - one that focuses on the full understanding of these technologies
        • Questions why one would need to know about information technology literacy?
          • students must know both in order to be prepared for today's jobs and "all walks of life" (4) because technology shapes the way we live and how we view the world
            • not understanding information technology literacy would limit one's skills
    • Another key of information technology literacy: "understanding the principles of how the technology world works" (3)
      • needs to cover a broad view
      • needs to appreciate and understand how history, economics, social and public policy issues play an important part in information technology literacy and technology
  • Information Literacy
    • In learning about information literacy, the knowledge needs to encompass the full range of communication types, such as image and video
    • People need to have an understanding of how information resources are a part of technological and economic structures and how they interrelate
    • Range of issues related to information policies that a person must know:
      • legal, social, economic, ethical, privacy, authenticity, integrity, and management