I wasn't sure where to ask this, so I decided to frame it around my digital book collection (and internet research in general) and ask it here.
The current situation is roughly this: I'm a book addict with a library of 6,000+ books (I'd have to run a search to be sure), which I will probably never get through. Most of the books have _some_ useful content, but since I am a voracious reader, the ratio of new information to known information in books has gone down _significantly_ over my life. The same happens with internet research: the vast majority of information is not useful, but it must still be sifted through to get to the truly useful or interesting material.
So my question is this: how do I process the information in order to create a filter? I have considered skimming the books, but that would take ages. I already read at 600 wpm, so there's not much more to be done on that front. When researching, I read the first sentence of a page and judge the rest based on it (I have other heuristics too: if the page is graphics-heavy or script-heavy I close it, if there are a lot of ads I close it, and if the layout is poor I usually close it).

What I need is some way to reduce a book to a summary of its most important points, but I don't know how to judge importance without actually reading it. There are also cases where I'd like to "average" the information found in several books: they all make similar points, so I'd pick out the parts where they differ and add those to the core where they agree. My knowledge of text processing isn't great, and what I've seen of computer processing of natural language has been poor at best.
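To make the tech side concrete, the crudest approach I can imagine is extractive summarization: score each sentence by how frequent its words are in the whole text, then keep the top few. Here's a minimal sketch (assuming plain-text input; the ad-hoc stop-word list and frequency scoring are just placeholders, not something I've validated against real books):

```python
import re
from collections import Counter

def summarize(text, n_sentences=3):
    """Pick the n sentences whose words are most frequent overall,
    returned in their original order. A crude proxy for 'importance'."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    # Ignore common glue words so they don't dominate the scores.
    stop = {'the', 'a', 'an', 'and', 'or', 'of', 'to', 'in',
            'is', 'it', 'that', 'this'}
    freq = Counter(w for w in words if w not in stop)

    def score(sentence):
        ws = re.findall(r'[a-z]+', sentence.lower())
        # Average word frequency, so long sentences aren't favored.
        return sum(freq[w] for w in ws) / (len(ws) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return [s for s in sentences if s in top]

text = ("Cats are great. Cats purr loudly. "
        "Dogs bark. The weather is fine today.")
print(summarize(text, 2))
```

I suspect real summarization (let alone "averaging" several books) needs far more than this, which is partly why I'm asking.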
Any advice? I would appreciate both tech-based and skill-based techniques; if there's anything I can do differently in my habits, it's fair game.