Three sorting algorithm ideas
Binary Sort
A sorting algorithm whose cost per element is constant with respect to the number of elements, and instead proportional to the average length, in bits, of the elements being sorted
Let's say we have a function that returns a list of {0, 1} values which is a given element's binary representation. Now we make an array of the size of our input list, which we will use as a binary tree. Each node will be a struct instance. For each element in our list to be sorted, we navigate this tree using the element's binary sequence as the path, creating new branches as necessary. When we get to the end, we insert a reference to the original object into the beginning of a linked list of references to objects belonging at that node (i.e., equivalent elements). Then at the end of our sorting we simply walk the tree and traverse its linked lists to return our sorted list. If we want equivalent objects to appear in the same order in which they appeared in the original list, we can append them to the end of the linked lists instead and have the node store a pointer to the current last linked list item.
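Here's a minimal sketch of that naive version in Python, assuming the elements are unsigned integers of a fixed bit width (so every path through the tree has the same length); the node layout, WIDTH and bits() are just illustrative, not anything prescribed above:

WIDTH = 16  # assumed fixed bit width of the keys

def bits(n):
    # yield n's bits most-significant first
    for shift in range(WIDTH - 1, -1, -1):
        yield (n >> shift) & 1

class Node:
    def __init__(self):
        self.children = [None, None]   # the 0-branch and the 1-branch
        self.equal = []                # elements whose whole path ends here

def binary_sort(items):
    root = Node()
    for item in items:
        node = root
        for b in bits(item):                  # walk/create the path for this element
            if node.children[b] is None:
                node.children[b] = Node()
            node = node.children[b]
        node.equal.append(item)               # appending keeps equivalents in input order

    out = []
    def walk(node):                           # equivalents first, then the 0 side, then the 1 side
        out.extend(node.equal)
        for child in node.children:
            if child is not None:
                walk(child)
    walk(root)
    return out

print(binary_sort([513, 7, 7, 42, 0, 65535]))   # [0, 7, 7, 42, 513, 65535]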
This is only the naive version, though; we don't really have to build the full path for every element. Instead of having a function that returns an object's binary representation, we'll have a function that returns an iterator over it. Then instead of navigating/creating nodes all the way to the end of the path, we navigate until we find a node that simply stores a pointer to an iterator for an element that's somewhere underneath it. Since a node can only contain one such iterator pointer, when you come to such a node you must poll values from *both* iterators (your current element's and the node's) and create a path until the values diverge, at which point you create two children and put the respective iterator and element pointers in those, nullifying the pointers in the original node. You have a 50/50 chance of this happening on the first poll, a 3/4 chance of it happening by the second poll, and so on. When an iterator exhausts itself you delete the iterator object, nullify the iterator pointer and start the linked list of equivalent elements for that node.
As an alternative to the linked lists, we could simply have a pointer to the first instance of an equivalent member and a count of how many times it repeats, but that only works with members that don't have any 'hidden variables' with respect to their binary representations. Plain old strings or numbers would work just fine that way; sorting person objects by last name wouldn't.
This algorithm isn't generally suitable for mixed-type collections or any type that uses a special comparison function. Strings, ints, floats and fixed decimals should work fine. The above is optimized for strings. For a numeric type, all iterators exhaust at the same branching level (8, 16, 32, or 64), although that doesn't change the algorithm a whole lot -- maybe it provides for some minor optimizations. But because comparing two numbers might be cheaper than pulling a value from a binary iterator, we could forgo the binary representations altogether for numbers: instead of a node storing a pointer to an iterator and an element, it would simply store a number. That could be a huge improvement. And even if we still used binary representations, most numeric values probably start with lots of 0's, so we could speed things up by using a BSR (or LZCNT) instruction to determine the number of leading zeros, then jumping directly to a node -- skipping the leading run of zeros in the path -- via a table of node pointers indexed by the count of leading zeros, for widths of 8, 16, 32 or 64.
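As a rough illustration of that shortcut, using Python's bit_length() where hardware would use BSR/LZCNT; the zero_run_nodes jump table mentioned in the comment is hypothetical:

WIDTH = 32   # assumed fixed width of the numeric type

def leading_zeros(n):
    # n is assumed to be a non-negative int < 2**WIDTH
    return WIDTH - n.bit_length()

# A sorter could pre-store zero_run_nodes[k], the tree node reached by a run of
# k zero bits, and start inserting n at zero_run_nodes[leading_zeros(n)].
print(leading_zeros(1))        # 31
print(leading_zeros(2**31))    # 0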
Binary Search Library Sort
Use the Library Sort method ( http://en.wikipedia.org/wiki/Library_sort ), but instead of just iterating through the list to find where an item belongs, do a binary search. Since we start with an empty gapped list, the elements in it will always be sorted. A new element is placed at the first empty slot the binary search lands on, so the search never has to worry about traversing gaps.
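A rough sketch of the search step, assuming the gapped list stores gaps as None; when a probe lands on a gap we slide to the nearest occupied slot on its right before comparing (the function name and representation are just for illustration):

def find_slot(gapped, key):
    # return an index in the sorted, gapped list where `key` belongs
    lo, hi = 0, len(gapped)
    while lo < hi:
        mid = (lo + hi) // 2
        probe = mid
        while probe < hi and gapped[probe] is None:   # skip over a run of gaps
            probe += 1
        if probe == hi or gapped[probe] >= key:
            hi = mid
        else:
            lo = probe + 1
    return lo

print(find_slot([1, None, 4, None, None, 9], 5))   # 3, a gap between 4 and 9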
Binary Search Sort
This is actually not a sorted list; it's a sorted linked structure. Take one element and put it in; that element becomes the first search point. The next element becomes the next search point, either larger than or smaller than the previous element, and so on (except that identical elements share the same search point). This method actually seems to be identical to the binary sort method above, except using comparison instead of a binary representation. But we can perhaps improve the algorithm by changing search points. For example, each node can store the average value of the two nodes it's 'between' (its parent and grandparent; we can also use the max and min of the whole collection for the first two depths if we want), and if a value comes along that is about to be placed underneath it, but is closer to the parent's average value than its parent is, then it swaps itself with its parent node. For strings our 'average' might just have to be a character and its place in the string.
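A minimal sketch of the basic, un-rebalanced version of this: each element becomes a comparison-based search point, equal elements share a point, and an in-order walk returns the sorted result. The rebalancing-by-average refinement isn't shown, and the names are just illustrative:

class SearchPoint:
    def __init__(self, value):
        self.value, self.count = value, 1
        self.smaller = self.larger = None      # the two directions of the search

def insert(root, value):
    if root is None:
        return SearchPoint(value)
    node = root
    while True:
        if value == node.value:                # identical elements share a search point
            node.count += 1
            break
        side = "smaller" if value < node.value else "larger"
        child = getattr(node, side)
        if child is None:
            setattr(node, side, SearchPoint(value))
            break
        node = child
    return root

def in_order(node, out):
    if node is not None:
        in_order(node.smaller, out)
        out.extend([node.value] * node.count)
        in_order(node.larger, out)
    return out

root = None
for x in [5, 2, 8, 2, 7]:
    root = insert(root, x)
print(in_order(root, []))   # [2, 2, 5, 7, 8]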
Wednesday, December 17, 2008
Monday, December 15, 2008
The ultimate hacker's ensemble
1. A WiFi NIC that supports monitor/RFMON mode, likely a Cisco
2. A high-gain directional parabolic WiFi antenna, like the RadioLabs 24dB Parabolic Grid WiFi Antenna ($70)
3. Kismet to record packets, detect networks, etc. (free)
4. Aircrack-ng to crack WEP encryption (free)
5. A web browser. Now the trick is to feed those HTTP streams to the web browser. Since a web browser normally won't accept a web page from a server unless it has sent a request, we'd have to hack the web browser -- i.e., download the source to Firefox (open-source), or perhaps build our own browser from WebKit. There might be an easier way, though. Given a web browser that supports the experimental Reverse HTTP, we could first send the command to put it into Reverse HTTP mode, and then every subsequent web page would be server-push. This still wouldn't likely work for AJAX or dynamic Flash applications with two-way stateful communications, but for regular old HTML it would still be cool.
6. Wireshark to decode AIM, Live Messenger, etc. messages
7. A laptop (for war-driving/stalking), or a normal desktop PC if we just want to eavesdrop on networks from around our house
8. A car with heavily tinted windows (again, only if we want to stalk particular people/organizations or go war driving)
Now we stitch it all together programmatically so that we can just point the antenna, select a random NIC on a random BSSID, and view their web sessions as if they were our own, possibly with pop-ups for IM messages too. And we'd do this all completely passively - i.e., it would be physically impossible for them to detect that we're doing it or to know where we are. Sweeet
Wider memory bus
We could probably make computers faster with a wider memory bus. Say, for example, that the bus speed is 800 MHz and the CPU speed is 1.8 GHz. Then for every cycle of the bus, the CPU can process at most about two 64-bit values, or two 128-bit values with SIMD, or with four cores, perhaps eight 128-bit values. Thus, if we have a bus width of X bytes, we can define an opcode to retrieve up to X bytes of RAM from location Y and put it in the cache, and do it in fewer cycles. X could be, say, 128 bytes. (In the above scenario, requesting more than 32 bytes at once wouldn't be necessary for a given core, but perhaps a 128-byte-wide memory bus would aid in 4 cores each requesting 32 bytes at once.)
Since we don't know what the future of CPUs and bus speeds holds, for future compatibility we should probably make this bus width as wide as is practical, unless it can be increased later in a way that scales transparently as far as programmers are concerned.
This would save time every time memory is accessed sequentially, which is often.
I guess the pipelines within the RAM would have to be changed too? I don't know much about RAM.
Apparently a CPU has its own prediction mechanisms and automatic pre-caching. This automated pre-caching can equally take advantage of a wider memory bus, and conversely, my explicit op-code idea for pre-caching carries its own advantage independently of the idea of extending the bus width. That is, why leave it all up to the processor to predict what's going to happen, when the programmer (or possibly compiler/VM) can just *tell* it?
Memory controller transactions
To help with IPC, the memory controller should support transactions. Since all CPUs must go through the same memory controller in order to use the same RAM, this is analogous to a DBMS supporting transactions for multiple clients connected simultaneously. By supporting it in hardware, at the point where memory updates are necessarily done one at a time anyway, you could eliminate the convolutions and dilemmas of trying to eliminate race conditions in software.
Actually, I don't know very much about how transactions work. But the memory controller could implement a lock so that the first CPU to request a transaction has exclusive use until the transaction is over. I hear that transactions normally use a queue, but that doesn't seem necessary in this case: in the same amount of time it would take a CPU to queue a transaction, the memory controller could already have committed the last one to memory. So it's really just about a lock.
Perhaps while one core/CPU is completing a "transaction", another can be modifying memory in a different place that's defined not to be in the area of that transaction. For example, the opcode for a "transaction" could include the number of bytes of contiguous memory to be considered part of that transaction.
CPU paging support
The CPU should store, probably within the page tables, a value indicating the last time any given page of memory was accessed. That way the OS can ask the CPU when a page was last accessed to help it with its swapping algorithm. It seems to me that swapping by time since last access would be almost as effective as, and a lot simpler than, using some sort of frequency-of-use algorithm. Without this feature, the OS has no good way of knowing when a page of memory in RAM was last accessed.
I don't know what mechanism the OS uses now for deciding what and when to swap, but I suspect it's less efficient than this. I would recommend just swapping out and in at the granularity of memory pages. If I remember correctly, a page is conveniently 4 KB, which is about the smallest amount of storage a hard drive can read or write in one command.
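A toy sketch of the swapping policy this would enable: evict the page whose (hypothetical) CPU-reported last-accessed timestamp is oldest. The data here is invented for illustration:

def pick_victim(pages):
    # pages: dict mapping page number -> last-accessed timestamp reported by the CPU
    return min(pages, key=pages.get)

print(hex(pick_victim({0x1000: 50, 0x2000: 12, 0x3000: 90})))   # 0x2000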
Saturday, December 13, 2008
Search-related metadata
I just sent this to Google
--
I was thinking about the schemas idea behind WinFS, and how it might be applied to the intarweb. I decided WinFS isn't particularly relevant to the web, and started looking up metadata for HTML. I don't think that's exactly what I had in mind either. I was trying to think of something that would make it easier to map the interrelations between webpages and allow authors to write metadata that aids in intelligent web-searching capability. WinFS-like schemas and XSD are only about breaking down particular things, like addresses. So I decided to think up my own schema for this purpose.
The reason I'm telling you guys this is that Google supporting a particular metadata schema for its search would probably be the most effective--maybe the only--way for such a schema to become widely used. (The point of its becoming widely used is to make web searching more powerful.)
This schema would not have any mechanism for defining relationships between parts of a webpage -- only relationships between the webpage and various ideas/words, or perhaps even other webpages, at least in its main intention. That said, parts of a web page could possibly be sectioned off with respect to the pertinent metadata, so perhaps in somewhat the same way pages can relate to other webpages, they could relate to other parts of the same webpage. This may or may not be useful for search-engine purposes, but it could be useful for other general metadata-related purposes.
Here are my ideas for relationships:
- this webpage is vaguely relevant to Y in some way
- this webpage is closely relevant to Y in some way
- this webpage is about something particular to Y, in understanding
- this webpage is about something particular to Y, in function
- this webpage is about something that Y is particular to, in understanding
- this webpage is about something that Y is particular to, in function
- this webpage is about something particular to Y and that Y is dependent on, in understanding
- this webpage is about something particular to Y and that Y is dependent on, in function
- this webpage is about something that complements Y, in understanding
- this webpage is about something that complements Y, in function
- this webpage is about something that is dependent on Y, in understanding
- this webpage is about something that is dependent on Y, in function
- this webpage is about something that Y is dependent on, in understanding
- this webpage is about something that Y is dependent on, in function
- what this webpage is about and Y are mutually dependent, in understanding
- what this webpage is about and Y are mutually dependent, in function
...where Y could possibly be a word, a phrase, a key word out of a predefined set of key words, a URL, a metadata-delineated section of a webpage, or a list of any of the above. In all but the first two, Y is most likely a URL or webpage section.
- any of the above with the added qualifier that it applies ONLY to Y
- any of the above with the added qualifier that it was designed specifically to apply to Y
- by design, this webpage is particular to Y and is useless without Y in its functionality
- by design, Y is particular to this webpage and is useless without it in its functionality
The last two have to do with the mechanics of a website -- for example, a page for filling out a form would be useless without the main website. This is why they don't say "is about"; the webpages ARE the functionality. "By design" isn't coupled with "in understanding" to make more relations, because those are already covered by the relations above combined with the added qualifiers.
I'm not saying one way or another whether the two added qualifiers above should be used as qualifiers in the syntax or should combinatorially create 48 more relationship types. With enough good ideas, the metadata syntax could actually grow into a grammar of its own, which would definitely call for making them qualifiers.
I'm sure there are more good ideas for general relationships that I haven't thought of.
The syntax could also specify domains that a webpage falls under. Some examples would be:
- technical document
- interactive
- entertainment
- media
Perhaps even a hierarchy of domains, perhaps modeled after Yahoo! Directory, dmoz.org, or similar. And then a tree relationship between webpages could be automatically generated, so people can search for coordinate sisters, etc.
But even in the above, the domains aren't all mutually exclusive. They could be made into key words, but then you'd lose the hierarchical aspect -- or maybe not. You could combine the two and have key words that aren't mutually exclusive but have hierarchical relationships to each other. Or, you could forgo hierarchies altogether and have a more web-like system where everything is interconnected according to various relationships but there's no top or bottom to it; it's not a tree-like structure. The interrelationships between the key words could be pre-defined, or perhaps they could be extrapolated somehow from analyzing the topology of hyperlinks in webpages that have key words defined -- but then the relationships probably wouldn't be very semantic.
Speaking of semantics, whatever words or key words appear in WordNet would lend themselves to automatic relational mappings according to all the relationships that WordNet covers. Also, the relationship-type metadata defined above would lend itself to hierarchical or web-like structures wherever chains of sub-part or complement relationships can be found, which could also aid in searching ability.
I haven't included any ideas for the raw syntax of the metadata because I consider that an implementation detail. Needless to say, it would be something in XML that's invisible to web browsers.
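Purely as illustration of what one of these assertions might carry (shown as Python data rather than the eventual XML syntax, which the letter deliberately leaves open; every name and value here is invented):

# each assertion: (relationship, Y, qualifiers)
page_metadata = [
    ("closely_relevant_to", "quantum computing", []),
    ("about_something_particular_to_in_function", "http://example.com/app", ["designed_specifically_for"]),
    ("complements_in_understanding", "http://example.com/tutorial/part1", []),
]

def assertions_for(relationship, metadata):
    # a search engine could index pages by relationship type and target Y
    return [y for rel, y, _quals in metadata if rel == relationship]

print(assertions_for("closely_relevant_to", page_metadata))   # ['quantum computing']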
Friday, December 12, 2008
Google Captchas
Most captchas out there can be cracked by software nowadays with enough effort.
Google had a program the other year where they would show any two random participating users a given image, and for X amount of time they'd both type in as many possible tags for that image as they could, and the tags that they both happened to type in are associated with that image.
My idea is to use that database to show those images - as thumbnails - as captchas, and let the user type in an image tag. If it matches any tag in the database for that picture, they pass. This would have to be a captcha service provided by Google.
So as not to frustrate users, we should probably accept all tags typed in by both parties - not only the ones agreed upon - presuming that information is included in the database.
In case thumbs are too small in too many cases, there could be a button to show the image full-size, perhaps in a new window, although it would be just as easy to recycle the image. Or images could be shown full-size by default - perhaps in a new window, perhaps on the same page, or perhaps the full image could pop up just when you mouse over the thumb, like profile pictures in apps.facebook.com/yesnomaybe.
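A toy sketch of the check described above: the challenge passes if the user's guess matches any tag stored for that image (with the relaxation of accepting either labeler's tags, per the previous paragraph). The tag data here is invented:

def check_captcha(guess, tags_for_image):
    # case-insensitive match against any stored tag for this image
    return guess.strip().lower() in {t.lower() for t in tags_for_image}

print(check_captcha("Dog ", {"dog", "puppy", "golden retriever"}))   # True
print(check_captcha("cat", {"dog", "puppy", "golden retriever"}))    # False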
Wednesday, December 03, 2008
Google Earth Flight Sim
Use Google Earth data, including but not limited to the 3D building data in certain cities, to provide accurate worldwide scenery for a flight-simulation application. Since Google Earth uses 70+ TB, distribution would have to be limited to relatively small areas, or the flight simulator would have to be constantly communicating with Google Earth, if the bandwidth were sufficient.
It won't be long, though, until HDDs are large enough to store the whole thing (they're up to a TB now, and it was just a few years ago that 1 GB was a huge hard drive), although media to distribute the program on might be a different matter (Blu-ray is only 25 GB, and we don't even really have it yet).
Algorithms should probably be used to sharpen the images for very low flying, and perhaps to create tree models where there appears to be vegetation, extrapolate houses and buildings to 3D, simulate water where there appear to be lakes, and extrapolate height data for hills and mountains.
Balloon Telescopes
Make a very large, clear balloon, fill the bottom portion of it with water, and presto, you have a huge lens. Then make the smaller lenses in the appropriate shapes to complement the odd shape of the larger lens, and presto, you have a gigantic telescope.
Or, make the balloon an air balloon and coat the bottom of the inside of it with silver, then put the complementary mirrors inside it. Presto, a gigantic reflecting telescope. You could even float it up high in the atmosphere.
Wednesday, November 26, 2008
Space Needle
To get into space: use a balloon - heat or helium - to raise the craft as far as possible. Then you start moving as fast as you can using solar power or maybe a rocket. The balloon should be shaped like a very, very large needle so that it can be aerodynamic. If you go fast enough maybe you can rise above the atmosphere and start skipping across its surface. Keep going faster and faster until you hit escape velocity.
Tuesday, November 25, 2008
Stack
The stack is a sticky situation. The way it's implemented, an application more or less has a fixed stack size (typically 1 MB?) which is not free to be used by anything else, even when most of it is not in use, and neither can the application expand its stack size when it runs out.
I understand why it was implemented this way -- to make pushing to and popping from the stack efficient. But I think it could be improved with no loss of performance. Just implement in the CPU architecture a check for each push and pop. If the push would go beyond the stack segment, raise an interrupt that calls an OS function that can allocate more pages to the stack segment, then continue operation of that thread. And if a pop recedes by a certain amount, you can also raise an interrupt that calls an OS function that may free a page of memory if it's at least X bytes behind the stack pointer. This would obviously require a stack pointer that pushes up and pops down, as opposed to the other way around, which is only the case anachronistically anyway, due to old programs that put the stack and the other segments in the same memory space.
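A toy, purely illustrative model of that grow-on-demand behavior (everything here is made up; a real implementation would live in the CPU's fault handling and the OS, not in Python):

PAGE = 4096

class GrowableStack:
    def __init__(self):
        self.limit = PAGE      # bytes currently allocated to the stack segment
        self.sp = 0            # stack pointer, growing upward as suggested above

    def _grow_handler(self, needed):
        # stands in for the interrupt handler that maps more pages
        while self.limit < needed:
            self.limit += PAGE

    def push(self, nbytes):
        if self.sp + nbytes > self.limit:      # would overflow: "raise the interrupt"
            self._grow_handler(self.sp + nbytes)
        self.sp += nbytes

    def pop(self, nbytes):
        self.sp -= nbytes
        if self.limit - self.sp >= 2 * PAGE:   # far behind the pointer: release a page
            self.limit -= PAGE

s = GrowableStack()
s.push(3 * PAGE)        # grows the stack to 3 pages instead of overflowing
print(s.limit // PAGE)  # 3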
Realistically, you'd need backward compatibility with software, so you'd have to emulate the backwards stack pointer by starting it off at 0xFFFFFFFF on 32-bit systems (or the top of the address space on 64-bit), even though the memory addressed can't necessarily go all the way down to zero without allocating more or raising an exception.
The OS could even start swapping the uppermost stack data out to disk upon interrupt if the pointer goes down too far to allocate more memory, thus giving us, say, 300 GB of stack space, if'n we ever really need it. And it wouldn't have to impinge on speed, because it only does this stuff when there would have been a stack overflow otherwise anyway.
Tuesday, August 26, 2008
for electric cars
Electric cars should come with batteries that are easily replaceable. That way you stop at a gas station that supports electric, they take out your spent batteries and replace them with fully charged ones. No waiting.
theteofscuba: using a fork lift?
inhahe: i dont know, i suppose they could design the car to make it easily exchangeable with a machine
theteofscuba: mayb
theteofscuba: the chevy volt uses electricity for u p to 40 miles
theteofscuba: and when it ruins out of batery it uses gas
theteofscuba: but the tesla roadster goes like 400 miles on one charge and the batteries are huge so replacing it doesnt seem feasible
inhahe: if they instated this system the batteries would conform to some particular voltage and size standard
inhahe: so the roadster would just have more of them
Obviously some people might consider it an issue that the gas station may give out a battery that's newer and has more lifetime left than the one they receive. The problem is basically what happens when a gas station is stuck with a battery that's used enough that they're obligated to throw it away/recycle it (and there should be a standard code for determining this). I think the answer is that the gas station simply foots the cost, and it averages out over all the service they do. After all, there's not really an opportunity for motorists to defraud the gas stations this way, especially since they don't even have to worry about the age of their batteries once this system is instated. And the gas stations don't necessarily have to foot the cost per se; they could have deals with the battery companies.
Labels: batteries, charging, electric cars, gas stations, idea
Friday, August 22, 2008
Faster Harddrives
Why just one head on an actuator arm? Put as many heads as will physically fit in a series, to put different heads on different tracks simultaneously. If you interlace the data right, you'll get throughput increased by a factor of X, where X is the number of heads on an arm. You can also put them in a grid rather than a line, with successive rows offset to each other, to get more in-between tracks. This will work a lot better (or at all) if the range of head motion is perfectly along a radius of the platter (and so would be its row(s) of heads).
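A tiny sketch of the interleaving idea, treating the X heads like stripes so one arm position reads or writes X blocks at once (purely illustrative; HEADS and the block numbering are made up):

HEADS = 4

def stripe(blocks):
    # head k gets every HEADS-th block starting at k
    return [blocks[k::HEADS] for k in range(HEADS)]

print(stripe(list(range(12))))   # [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]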
Friday, August 15, 2008
How To See Ghosts
Make a 2-dimensional FET array with a fair amount of resolution. Put it in a Faraday cage with a pinhole in the front to make the FET array work like a pinhole camera, except one that captures EMF. Now go somewhere where there's haunting activity and start shooting. I read once that someone had captured flying spirits around the pyramids of Egypt using a FET array somehow.
Wireless M/b Form Factor
<myriachromat> i just had a cool idea
<myriachromat> make a new type of motherboard that carries the power through the motherboard so you don't have to have wires (at least not power wires)
<myriachromat> of course you could do the same with data
<myriachromat> i guess i was inspired by this new dell computer that has no wires to the monitor, the thing that holds it carries everything
<myriachromat> you can't really change the standard for power to the devices though or their mounting
<myriachromat> so you'd need the motherboard to have little power supplies and data sticking out
<myriachromat> from the sides
<myriachromat> hmm
<myriachromat> i was thinking you could have it transitional, where new devices are made with both inputs, so that they can be made so that just mounting it gives it power and data if the m/b supports that
<myriachromat> but then if you have any one device that still needs wires
<myriachromat> then you need a traditional power supply and all those wires it has defeats the purpose
<myriachromat> sort of
<myriachromat> unless you were to get a $600 modular one
<tommjames> mine costed about £60
<tommjames> which i guess is about 120 dollars
<myriachromat> ah ok
Labels: form factor, motherboard, power supply, wireless, wires
Sunday, August 03, 2008
Web 4.0
Here are some of the languages one might currently use and integrate when making a webpage, and also a few of the things a browser needs to support in order to handle the whole World Wide Web:
(X)HTML
CSS
Javascript
Java
Flash
ActiveX
Silverlight (not extremely popular)
So for the next incarnation of the web, I recommend combining the best features of all of these into one language.
Of course Java, Flash, ActiveX and Silverlight almost serve the same function, so I don't have to explain how to integrate those. JavaScript is more integrated with HTML, so integrating that is just a matter of giving our new language access to the DOM. Although there might not be HTML or a DOM: this language will allow much more flexible webpages - more comparable to applications, and applications don't use a DOM. But then, who wants to have to create an object for every element of text? So we probably should keep XHTML and the DOM. XHTML will still be its own language; it wouldn't be as readable to implement it as a set of functions within the procedural language, and implementing a procedural language in X(HT)ML would look horrible as well as being a PITA to code in. But there are four different ways we can integrate these two languages as separate languages:
1 is to use a procedural command to output XHTML (defined as a literal string, for the purposes of this question) to the screen. This can be a print statement, perhaps shortened as ? (after BASIC), although it would be more consistent to use the function that changes the contents of a DOM element, as it has to exist anyway for dynamic content. (The top-most DOM element, representing the whole window, would always exist by default.) We could make a print statement that can take a target, although if we do that then one should still be able to change content using a DOM-object-bound function, because those objects will already have a set of functions and that function should be included for completeness; that makes the targeted print statement somewhat redundant. But it *might* also be more convenient. (See the toy sketch after these four options.)
2 is to have separate sections of the document for code and XHTML. We still need 1 for dynamic content.
3 is to use a template language. Using a template language could be a pain, because who wants to keep having to escape lines of code? But then we'd still have 1 to work with, because you can't target a specific DOM object using the template way, AFAIK. Well, maybe you can but only by putting the template code within a DOM object so the object would have to be hard-coded into the page.
4 is to use an XHTML tag just like Javascript does. This is like 3 but just more limited. This, too, requires option 1.
None of these options are mutually exclusive.
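Here's the toy sketch referenced under Option 1: a purely hypothetical model in which every DOM object carries a content-setting function and a targeted print is just sugar for it. None of these names correspond to a real API:

class DomElement:
    def __init__(self, name):
        self.name, self.children, self.content = name, [], ""
    def set_content(self, markup):     # the DOM-object-bound function from Option 1
        self.content = markup

def print_to(markup, target):          # the targeted print statement, sugar for the above
    target.set_content(markup)

window = DomElement("window")          # the always-present top-most element
print_to("<h1>Hello</h1>", window)
print(window.content)                  # -> <h1>Hello</h1>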
We could make web 4.0 backward-compatible with Web 1.0 or Web 2.0 (and that would imply Option 2), but then we have sort of defeated the purpose because instead of replacing 7 languages with 1 we have added 1 to make 8. Actually, though, web browsers would most likely support Web 1.0 and 2.0 as well as 4.0 because of the need for transition, so it's just a difference of whether they would support them in the same page or not. The real issue is that if we don't give the new language enough important features, people will still have reason to continue expanding development of, and making new websites in, the old languages, and then we would have defeated our purpose.
CSS and HTML should not quite be separated the way they are. It's a legacy issue. Many functions are available in both CSS and HTML, and some are available in CSS but not HTML. I don't recommend having two different ways to do the same things. I recommend having only HTML attributes, which would include everything that CSS does now, and making CSS simply a way of using those HTML attributes in a way that does what its name suggests: cascading style sheets. I wrote more about this at http://inhahe.nullshells.org/how_css_should_have_been_done.html. One of the features described there - embedding JavaScript in CSS - requires either Option 2, or a special provision in CSS.
One neat thing to do would be to allow this language to be used as a server-side language too. Then we would use some convention in the language for determining which parts are processed on the server (like PHP) and which parts are processed on the client (like JavaScript), but the distinction would be somewhat seamless for the programmer. This feature, BTW, definitely begs for Option 3 above. The use of Option 3 escapes could actually be the sole determiner of whether something is client-side or server-side code, but then you couldn't embed template-language code in dynamic content. Also, I have another idea of how to do it:
Security concerns would determine whether something is executed client-side or server-side. They obviously do anyway, but I mean automatically instead of manually. Certain objects could be demarcated as "no-read", and then those objects will not be sent to the client. Certain objects could be demarcated as "no-write", and then their value can't be set by the client. An example of a "no-read" object would be someone else's password. An example of a "no-write" object would be an SQL string. We may need to put consideration into what to do about objects and parameters whose values are determined from no-read objects but aren't those objects themselves, and about no-write objects and parameters that are determined from other objects, because we don't want to make it easy to create subtle security leaks; but perhaps it would just be a no-brainer on the part of the programmer to make those objects no-read and/or no-write too.
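A hypothetical illustration of the no-read/no-write idea: a wrapper the imagined compiler could inspect when deciding what may be shipped to, or accepted from, the client. The class and flag names are invented for this post:

class Guarded:
    def __init__(self, value, client_readable=True, client_writable=True):
        self.value = value
        self.client_readable = client_readable     # "no-read" when False: never sent to the client
        self.client_writable = client_writable     # "no-write" when False: the client can't set it

password_hash = Guarded("s3cr3t-hash", client_readable=False)
sql_template = Guarded("SELECT * FROM users WHERE id = ?", client_writable=False)

def may_send_to_client(obj):
    return obj.client_readable

print(may_send_to_client(password_hash))   # False
print(may_send_to_client(sql_template))    # True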
Other considerations need to determine how the code is relegated too, though.
1. CPU bandwidth issues. Is the client likely fast enough? Would the server be too overloaded to process that information for each client?
2. The cost of client<->server communication: the issues of latency and bandwidth. Code could be delineated to minimize object transference. It might be possible to do this automatically - delineate code by tracking information dependencies. But then information that depends on information from both sides is not always determinable, because the compiler can't predict program flow and you don't know which variables are going to be asked for most. It's determinable for dependencies in loops but not dependencies in conditions, and not for groups of loops with different side-of-dependency ratios and dynamic loop counts (this raises Issue 3). This issue could be mitigated, though, because information gotten from server-side calls comes very fast, while information that can only be gotten from the client side is slow and infrequent, because it requires user interaction, so we can always be biased toward running code on the server side where information from both is required.
3. How finely does it granulate its delineation? When does it put one code block on one side and one on the other, vs. when it would stick them together?
So we probably will need to provide a convention for manually determining which side code executes on, and that might make automatic security-based delineation pointless, but not necessarily if the convention is used infrequently.
I would personally recommend a modified version of Python for this unified WWW language. I may just sound like a Python fan, but it's just that I've noticed that, at every turn, Guido picks the design implementation that's simple enough for novices and yet powerful enough for veterans or even language theorists, facilitates simplicity and consistency, is very concise, and borrows the best features from every other language, all the while being extremely elegant-looking and about as readable as pseudo-code. (And where there's a balance between those features, he always seems to know the best middle ground.)
It's also almost the fastest scripting language, although it seems to be significantly slower than Java and that concerns me because I'm not sure if it could do some things like those fire simulations or ripple-in-water effects that some of those Java applets do. I guess we could always have another Java but with Python-like syntax and language features, or just have RPython.
With all of this seamless server-side and client-side scripting, we may need the server side to implicitly keep its own mirror of the client-side DOM. This would facilitate another feature that I've been thinking about for a web framework, but that could perhaps be made a feature of the language: for Web 1.0 clients without JavaScript, we could have a restricted form of AJAX *automatically* fall back to page-based browsing, so you don't have to code your website in both AJAX and Web 1.0.
Basically your code would add an event handler for when a particular object is clicked, which makes changes to the DOM and then calls submit(), or we could make submit() implicit on the event handler's return. If the server is serving a non-JS browser, the clickable things in question will be sent as regular HTML buttons or links, the changes will be made to the server's DOM, and then upon submit() the entire page will be sent with its new modifications. If the server is serving a JS-enabled browser (not Web 4.0), buttons or links will be sent with JavaScript events associated with them, in which XMLHttpRequests are sent (unless no server interaction is necessary) and the page elements are modified locally. Some kinds of script that aren't crucial to the user's understanding or navigation, such as roll-overs, could be automatically ignored (by the server) for non-JS users, so that not every sacrifice has to be made to provide fall-back for the non-JS users. But some types of interaction would still have to be illegal. Or perhaps those restrictions don't have to exist in the language; perhaps it can just ignore all dynamic content that's not in the right form, but the coder should still take heed of those restrictions if he's using the fall-back feature, or else the page just won't make sense to non-JS users. It would be better to do it that way, if possible, because otherwise some pieces of code might be disallowed that wouldn't actually pose a real problem.
One feature this language definitely needs to have is the ability to download parts of code, widgets, dialog boxes, etc. to the client on-demand, but in a way that's easy for the web developer, perhaps even automatic. The point is that if you load a very complicated and customized interactive page, you shouldn't have to wait a long time for it to load, because that would just put people off of Web 4.0.
BTW, the reason I call it Web 4.0 is that I already have Web 3.0 reserved: it's the idea of having Java applets with this load-on-demand feature, and of websites not only being dynamic but using Java to behave just like regular applications, with all the freedoms that implies (except for the security restrictions, obviously).
Here are some of the languages one currently might use and integrate when making a webpage, and also a few of the things that a browser needs to have added to be able to support the whole world wide web:
(X)HTML
CSS
Javascript
Java
Flash
ActiveX
Silverlight (not extremely popular)
So for the next incarnation of the web, I recommend combining the best features of all of these into one language.
Of course Java, Flash, ActiveX and Silverlight almost serve the same function so I don't have to explain how to integrate those. JavaScript is more integrated with HTML, so integrating that is just a matter of giving our new language access to the DOM. Although there might not be HTML or a DOM. This language will allow much more flexible webpages - more comparable to applications, and applications don't use a DOM. But then who wants to have to create an object for every element of text, so we probably should keep XHTML and the DOM. XHTML will still be its own language; it wouldn't be as readable to implement it as a set of functions within the procedural language, and implementing a procedural language in X(HT)ML would look horrible as well as being a PITA to code by. But there are four different ways we can integrate these two languages as separate languages:
1 is to use a procedural command to output XHTML (defined as a literal string, for the purposes of this question) to the screen. This could be a print statement, perhaps shortened as ? (after BASIC), although it would be more consistent to use the function that changes the contents of a DOM element, since that has to exist anyway for dynamic content. (The top-most DOM element, representing the whole window, would always exist by default.) We could make a print statement that takes a target, but if we do, one should still be able to change content via a function bound to the DOM object itself - those objects will already have a set of functions, and this one should be included for completeness - which makes the print statement somewhat redundant. It *might* also be more convenient, though. (There's a rough sketch of this option just after this list.)
2 is to have separate sections of the document for code and XHTML. We still need 1 for dynamic content.
3 is to use a template language. That could be a pain, because who wants to keep having to escape lines of code? And we'd still need 1, because you can't target a specific DOM object the template way, AFAIK. Well, maybe you can, but only by putting the template code within a DOM object, so the object would have to be hard-coded into the page.
4 is to use an XHTML tag just like Javascript does. This is like 3 but just more limited. This, too, requires option 1.
None of these options are mutually exclusive.
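To make Option 1 a little more concrete, here's a rough sketch in plain Python of the two forms it could take - a print statement with an optional target versus a setter bound to the DOM object itself. Everything here (DomElement, print_to, set_content, window) is invented for illustration, not a proposal for actual names.

# Hypothetical sketch of Option 1: a print/target statement vs. a DOM-bound setter.
class DomElement:
    def __init__(self, tag, element_id=None):
        self.tag = tag
        self.id = element_id
        self.content = ""
    def set_content(self, xhtml):
        # Option 1, object-bound form: change this element's contents.
        self.content = xhtml

window = DomElement("window")          # the top-most element, always present by default

def print_to(xhtml, target=window):
    # Option 1, print-statement form: defaults to the whole window.
    target.set_content(xhtml)

sidebar = DomElement("div", "sidebar")
print_to("<p>Hello, Web 4.0</p>")              # prints to the window
sidebar.set_content("<ul><li>link</li></ul>")  # equivalent, object-bound form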
We could make Web 4.0 backward-compatible with Web 1.0 or Web 2.0 (which would imply Option 2), but then we have sort of defeated the purpose, because instead of replacing 7 languages with 1 we have added 1 to make 8. Realistically, though, web browsers would support Web 1.0 and 2.0 as well as 4.0 because of the need for transition, so it's only a question of whether they support them in the same page or not. The real issue is that if we don't give the new language enough important features, people will still have reason to keep expanding development of, and making new websites in, the old languages, and then we would have defeated our purpose.
CSS and HTML should not quite be separated the way they are. It's a legacy issue. Many functions are available in both CSS and HTML, and some are available in CSS but not HTML. I don't recommend having two different ways to do the same things. I recommend having only HTML attributes, which would include everything that CSS does now, and making CSS simply a way of using those HTML attributes in a way that does what its name suggests: cascading style sheets. I wrote more about this at http://inhahe.nullshells.org/how_css_should_have_been_done.html. One of the features described there - embedding JavaScript in CSS - requires either Option 2, or a special provision in CSS.
One neat thing to do would be to allow this language to be used as a server-side language too. Then we would use some convention in the language for determining which parts are processed on the server (like PHP) and which parts are processed on the client (like JavaScript), but the distinction would be somewhat seamless for the programmer. This feature, BTW, definitely begs for Option 3 above. The use of Option 3 escapes could actually be the sole determiner for whether it's client-side or server-side code, but then you couldn't embed template language code in dynamic content. Also, I have another idea of how to do it..
Security concerns would dictate whether something is executed client-side or server-side. They do anyway, of course, but I mean automatically instead of manually. Certain objects could be demarcated as "no-read", and those objects will not be sent to the client. Certain objects could be demarcated as "no-write", and their values can't be set by the client. An example of a "no-read" object would be someone else's password. An example of a "no-write" object would be an SQL string. We may need to put some thought into objects and parameters whose values are derived from no-read objects but aren't those objects themselves, and into no-write objects and parameters that are derived from other objects, because we don't want to make it easy to create subtle security leaks - but perhaps it would just be a no-brainer on the part of the programmer to make those objects no-read and/or no-write too.
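As a sketch of how the demarcation might feel in a Python-like surface syntax (these wrappers and names are entirely made up - a real compiler for the unified language would treat no-read/no-write as declarations it enforces when splitting code between client and server):

# Hypothetical no-read / no-write markers. Secret and ServerOnlyWrite are
# invented names, purely for illustration.
class Secret:
    """Wrap a value so it is never serialized to the client ("no-read")."""
    def __init__(self, value):
        self._value = value
    def reveal_server_side(self):
        return self._value
    def __repr__(self):
        return "<Secret: hidden>"

class ServerOnlyWrite:
    """Wrap a value the client may read but never set ("no-write")."""
    def __init__(self, value):
        self._value = value
    @property
    def value(self):
        return self._value

password_hash = Secret("$2b$12$...")                              # never sent to the browser
sql_query = ServerOnlyWrite("SELECT * FROM users WHERE id = ?")   # readable, not client-settable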
Other considerations also need to factor into how the code gets divided, though.
1. CPU load. Is the client likely to be fast enough? Would the server be too overloaded to do that processing for every client?
2. The cost of client<->server communication: latency and bandwidth. Code could be delineated so as to minimize object transference. It might even be possible to do this automatically, by tracking information dependencies. But information that depends on data from both sides is not always determinable, because the compiler can't predict program flow and doesn't know which variables will be asked for most. It's determinable for dependencies in loops but not for dependencies in conditionals, and not for groups of loops with different side-of-dependency ratios and dynamic loop counts (which raises Issue 3). This issue could be mitigated, though, because information gotten from server-side calls comes very fast, while information that can only be gotten from the client side is slow and infrequent, since it requires user interaction; so we can always be biased toward running code on the server side where information from both is required.
3. How finely should the delineation be granulated? When should two code blocks be put on different sides, and when should they be kept together?
So we probably will need to provide a convention for manually determining which side code executes on, and that might make automatic security-based delineation pointless, but not necessarily if the convention is used infrequently.
I would personally recommend a modified version of Python for this unified www language. I may just sound like a Python fan, but I've noticed that, at every turn, Guido picks the design that's simple enough for novices yet powerful enough for veterans or even language theorists, facilitates simplicity and consistency, is very concise, and borrows the best features from every other language, all the while being extremely elegant-looking and about as readable as pseudo-code. (And where there's a balance to strike between those qualities, he always seems to find the best middle ground.)
It's also nearly the fastest scripting language, although it seems to be significantly slower than Java, and that concerns me because I'm not sure it could do things like the fire simulations or ripple-in-water effects that some Java applets do. I guess we could always have another Java but with Python-like syntax and language features, or just use RPython.
Friday, August 01, 2008
Easier registration and login
For practically every forum or other kind of social website that I use, I have to first register, putting in my name, e-mail address, password, etc., etc. And then when I go back to it later I have to provide my username and password again. There are cookies, but they don't help if you log in months later. I have to enter the same information for all of these sites. Wouldn't it be nice if they could just detect my information for registration and login?
My idea is to have a personal file including all the personal information you're willing to give out (or perhaps some items that you're willing to give out only with confirmation). The website is able to access this information, but only the fields it requests; that way the user doesn't have to worry about confirming information that the website doesn't need. This could also be used to eliminate passwords. I use the same password for every site, so why have to type it in? Some people prefer to use a different password on every site, but then they have to keep track of all their passwords. You can get the security of the latter with the convenience of the former by using PGP. The user may still want to define a password, though, so that they can log on when they're not at the computer that holds the private key. Perhaps the information file could store a default username and password. Either way, the user should be able to specify a site-specific username and/or password.
How should the protocol be implemented?
Perhaps it could be a browser add-on or capability, implemented as a JavaScript function. We should not go arbitrarily (browser-specifically) adding JavaScript functions which websites may require, though; a website supporting this protocol should, and most likely will, allow the user to input information manually if such a function does not exist.
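Here's a very rough sketch, in Python, of the kind of logic the browser-side handler could run: a personal-info file holding the user's data, and a function that releases only the fields a site asks for, confirming the ones the user has flagged. The file format and field names are hypothetical.

import json

# Hypothetical personal-info file kept by the browser (invented format).
PROFILE = json.loads("""
{
  "name": "Jane Doe",
  "email": "jane@example.com",
  "default_username": "janedoe",
  "confirm_before_release": ["email"]
}
""")

def fields_for_site(requested_fields, profile=PROFILE, confirm=input):
    """Return only the fields the site asked for, prompting for any field
    the user has marked as needing confirmation."""
    released = {}
    for field in requested_fields:
        if field not in profile:
            continue
        if field in profile.get("confirm_before_release", []):
            answer = confirm(f"Release '{field}' to this site? (y/n) ")
            if answer.strip().lower() != "y":
                continue
        released[field] = profile[field]
    return released

# A site requesting registration data would receive, e.g.:
# fields_for_site(["name", "email", "default_username"])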
Thursday, July 31, 2008
A better voting system
Here are two paradoxes in voting:
1. You want your vote to count, so you vote for a candidate you think might have a chance to win, which means one of the most popular candidates. Everyone else is also voting according to that philosophy, which means that popularity itself becomes self-magnifying. It gives certain meaning to the statement, "a celebrity is a person who's well known for how popular they are." This serves to: a) give the underdog candidates even *less* of a chance in hell to win, and b) magnify the problem of mere campaign funds determining how popular a candidate is..because campaign funds affect how well recognized a candidate is to start with, and then from there we merely magnify that value. And the funny thing is, your vote's not going to change who becomes president anyway, so you might as well vote for the candidate you like.
2. Let's say you have a republican candidate, let's call him Kodos, and a democratic candidate, let's call him Kang. The voters nearly equally like Kodos and Kang on average, but there's another, independent candidate, let's call him Ralph Nader. The problem here is that the voters who like Kang more also like Ralph Nader. Some of them vote for Nader instead, which means Kodos wins. Why is that a problem? Consider this possibility: 60% of voters want either Kang or Ralph Nader, and don't want Kodos. 40% of voters want only Kodos. The just solution? Give them Kang or Nader (obviously Kang, because he was the more popular of the two). The actual result? Kodos (because, e.g., 35% voted for Kang, and 25% voted for Nader).
There is a single solution to both of these problems.
Make voting a rating system. You get to give each candidate a percentage of preferability, adding up to 100, or at least an order of rankings, and a sophisticated algorithm determines what outcome would satisfy the most people. I'm not sure what this algorithm is..perhaps something like one of the chess rating systems, such as the Glicko system, adapted for this purpose. Chess ratings are based on who beats whom, so in our adaptation candidate X beats candidate Y every time X comes before Y in anyone's list. Whatever the best algorithm is, it might even make the primary elections completely unnecessary, since, e.g., one democrat wouldn't 'take votes' from another democrat.
Alternatively, we could simply use a system where we vote for more than one candidate. No ratings, just put a check mark next to each candidate you like. Or, perhaps, yes's for the ones you like and no's for the ones you dislike. The latter option seems a little less positive psychologically, mostly since the next president would likely enough be someone you had no'd.. however, it seems to be necessary given the two-party system. For example, without no's, most democrats would check every democrat, and most republicans would check every republican. That leaves very little to determine which democrat wins or, alternatively, which republican. If there are no's, however, one could, for example, 'yes' Hillary, leave Obama neutral, and 'no' McCain. This is probably not a big issue, though, if the primaries remain in place.
I just noticed this useful comment -- thanks Sylvain
"Why reinvent the wheel. It is well-known to mathematicians since long ago, that the best voting system to avoid strategic biases, is the Condorcet method."
Wednesday, July 30, 2008
An optimal backup+mirroring system.
Backing up to something other than a harddrive has the disadvantage that you can't keep the backup up-to-the-minute.
Backing up using RAID 1 or another form of mirroring has the disadvantage that it reduces the amount of logical harddrive space to a fraction of its physical combined size.
You could get the best of both worlds by:
a) backing up new data to DVD-R or maybe DVD-RW periodically, for example once a day
b) mirroring what *isn't* yet backed up on other harddrive(s)
The filesystem would keep track of everything that's backed up and automatically mirror either sectors or files that are added or changed, until they're backed up. The mirrored data would exist as separate files, or one large file, on the (partially) mirrored harddrive(s) so that it can grow and shrink and not need the entire harddrive or even a fixed-sized partition on it.
If the mirroring works by sectors, then the one large file could have an allocation table of sectors and, since they're all the same size, when one is removed the last sector in the file can be copied to its location and then deleted from the end. For efficiency, if this is implemented at the file-system level, the sectors being mirrored could sit squarely on sectors of the file system they're being mirrored onto.
In the above case you don't even have to copy a sector from the end when one is deleted: just change the info in the FS of which sectors the file uses. But the kind of file system fragmentation that would cause might defeat the purpose.
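A tiny sketch of that allocation trick - one growable file of fixed-size mirrored sectors, where forgetting a sector (once it's been written to backup media) moves the last sector into the freed slot and truncates the file. The class and names are mine, purely illustrative; a real version would live inside the filesystem driver.

SECTOR_SIZE = 4096  # illustrative

class MirrorStore:
    """One large file of fixed-size mirrored sectors plus an in-memory
    table mapping original sector number -> slot index in the file."""
    def __init__(self, path):
        self.f = open(path, "w+b")
        self.table = {}            # original sector -> slot
        self.slots = []            # slot -> original sector

    def mirror(self, sector_no, data):
        assert len(data) == SECTOR_SIZE
        if sector_no in self.table:
            slot = self.table[sector_no]
        else:
            slot = len(self.slots)
            self.slots.append(sector_no)
            self.table[sector_no] = slot
        self.f.seek(slot * SECTOR_SIZE)
        self.f.write(data)

    def forget(self, sector_no):
        """Called once the sector has been written to DVD/tape: move the
        last slot into the freed slot and shrink the file."""
        slot = self.table.pop(sector_no)
        last_slot = len(self.slots) - 1
        if slot != last_slot:
            self.f.seek(last_slot * SECTOR_SIZE)
            data = self.f.read(SECTOR_SIZE)
            self.f.seek(slot * SECTOR_SIZE)
            self.f.write(data)
            moved = self.slots[last_slot]
            self.slots[slot] = moved
            self.table[moved] = slot
        self.slots.pop()
        self.f.truncate(len(self.slots) * SECTOR_SIZE)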
If, instead of files or sectors, the mirroring works on arbitrarily small chunks of files, then the one large file could be a database, which should be periodically compressed.
A partially mirrored harddrive should be able to be any harddrive, whether it's on the same system or on any other system on the network, as with Chiron FS.
Each system used as a mirror should also be able to have its own data mirrored, except of course for the data that's already acting as a mirror. That way, for example, if its harddrive crashes you won't have to reinstall the OS: just sync a new harddrive with a mirror (and/or DVD backup) and then install it.
I'm not sure how incremental DVD backups are usually handled, so that you don't have to go back 10 years and put in 100 different DVDs in series to reconstruct the original data, but I'm sure this question has already been handled. But if not, I have some ideas.
1) When possible, erase the entire DVD and rewrite it. Make sure a DVD isn't rewritten so many times that it stops working; have the software keep track of how many times it's been rewritten. Also use multisession for incremental changes. If used in combination with erasing, a multisession DVD wouldn't have to be erased until it gets full from incremental additions.
2) Have a limit on how many backup DVDs a given filesystem state can depend on. The software should know exactly what's on which DVD so it can automatically enforce this limit. When there are too many, it can start over from the beginning and back up to new DVDs, or better, erase the old DVDs and write over them. Or it can rewrite just enough to keep the number of backup DVDs below the limit.
-- I've been informed that tape backup is better than DVD for large-scale servers, so replace DVD with tape in the above and drop points 1 and 2 -- unless DVD is still an economical solution for small scale.
Solve world strife through understanding
Description 1
Have a project where you interview all the world's leaders, especially heads of state, to get their philosophy on life.. morals, religion, politics.. Make the interviews available on the website. Follow current events and keep people posted with the website's blog. Every event will be presented and interpreted according to the involved parties' philosophies. It would be best to write this neutrally, regardless of how immature someone's point of view may be.
Events should also include hyperlinks to related events in the blog, and there could also be a visual diagram showing the titles of all the events (which link to the blog posts themselves) and the links between them. Countries involved could be signified in the diagram elements by their respective flags. Perhaps events could be laid out in (roughly?) chronological order on the vertical or horizontal axis. Or, if that would detract too much from the network's optimal layout, just color them according to date or recency.
Description 2
Hold interviews with various world leaders - that is, anyone with weight, which includes politicians, terrorists, and businessmen. Try to eliminate as much superficiality as possible. Be as direct and honest as you can get /them/ to be. Find out what they *really* think and believe. Behind any belief system or justification is a simple psychological reason, just like how people say different things when they're drunk. Find out not only this, but this in relation to the decisions they make that affect the country or the world at large, and how that country (or terrorist organization, or religion, or business, etc.) relates to other countries/entities, be it peace, war, trade, embargoes, strife, amity, or otherwise. I imagine a static base of deeply delving interviews into the philosophies and political positions (and perhaps goals) of these people, combined with a constant news source that reports earth-shaking events as they relate to the interviews/philosophies of the active parties. Events related to strife, war, exploitation, etc. are probably the most important to cover.
The Pulse of America
A website for issues that will be voted on by congress or decided by governors, mayors, etc. For bills, it would automatically list all the ones that need to be decided on, from usa.gov. For issues affecting only legislation in a particular area, perhaps only people from that area would be allowed to vote. With each issue could be an explanation of what factors need to be considered, preferably posted by congress members or mayors, etc. Also, for bills, it would be nice if congress members posted summaries, including the gotchas that get tacked on just because they can be. Bills don't seem that easy to read.
This doesn't have to be a government-supported website, because the idea isn't that the voters legally or necessarily determine the outcome - it's for politicians to peruse results and feedback at their leisure. Voters should be able to attach explanations of their feelings/opinions along with their votes. Also have the ability for users to post their own bills or other suggestions for changes in legislation or budget, vote for user proposals, or create new branch versions of existing proposals/bills. Attached explanations should be rateable so that it's easy to see which sentiments have a lot of support and are well-presented. User proposals/branches should also be rateable.
Water Purification
For efficient water purification: electrolyze the water to extract the H and O. Perhaps you'd need to somehow separate the H and O from anything in the water that may have risen up with vapor, such as by using a centrifuge, a microscopically fine filter, or electrostatic filters (one for H and one for O, so that you can't ignite it). Pipe the H and O to another area and burn it, using as much of the heat from that as possible to fuel the electricity generation. Also mechanically allow the gas volume lost in that process to balance out the gas volume gained from extracting the H and O, so that you're not working against atmospheric pressure. Or just do both processes completely in vacuo. Collect the vapor (or water, if you extracted the heat that well) created by the combustion, condense it (if necessary) using a heat sink, and distribute it.
The key points here are to a) use the heat from the burning process, and b) not work against atmospheric pressure, if that's also a big issue. If this system is implemented ideally, it will operate at near-100% efficiency and take very little external power, and it requires no chemicals except for whatever you want to add before distribution (such as chlorine and/or fluoride).
This system can even work on *salt water*, and perhaps even sewer water. Although that raises questions of what to do with the residue. If a little bit of the water is left unextracted, you can simply let it flow through continuously to pass most of the residue, and then the only question left would be one of periodically cleaning / replacing the cathodes and anodes.
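For a sense of the magnitudes, here's a back-of-envelope on the round-trip energy per liter, using standard thermodynamic values (electrolysis needs at least ~237 kJ of electrical work per mole of water; burning the hydrogen back to liquid water releases ~286 kJ of heat per mole):

# Back-of-envelope energy balance for electrolyzing and re-burning water.
# 1 liter of water is about 55.5 mol.
MOL_PER_LITER = 1000 / 18.02          # ~55.5 mol
ELECTRICAL_IN_KJ_PER_MOL = 237.1      # minimum electrical work for electrolysis (Gibbs energy)
HEAT_OUT_KJ_PER_MOL = 285.8           # heat released burning H2 back to liquid water (enthalpy)
# (The ~49 kJ/mol difference is heat absorbed from the surroundings during ideal electrolysis.)

electric_in = ELECTRICAL_IN_KJ_PER_MOL * MOL_PER_LITER   # ~13,200 kJ per liter
heat_out = HEAT_OUT_KJ_PER_MOL * MOL_PER_LITER           # ~15,900 kJ per liter

print(f"electrical input : {electric_in/1000:.1f} MJ per liter")
print(f"recoverable heat : {heat_out/1000:.1f} MJ per liter")
# The point of the scheme above is that nearly all of this energy can be
# recycled back into generation, so the net draw per liter is only what's
# lost to inefficiencies, not the full ~13 MJ.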
Space Elevator
A way to have a rope (buckytube or whatever) to space:
Have a massive foundation, anchored in bedrock, attached to the bottom of the rope at the Earth's equator. Have a satellite attached to the top of the rope, somewhat farther out (or perhaps way farther) than geostationary orbit, orbiting - obviously - coplanar with the Earth's equator. Having the satellite farther out than geostationary orbit will cause the rope to pull it along faster than it would otherwise orbit, thus keeping the rope up (including the weight of whatever is currently elevating up the rope) by centrifugal force, just like a sling.
The longer the rope is, the less massive the satellite has to be, though the rope itself may be very expensive, so the most economical length could be minimal (Clarke belt), maximal (the length at which you need no satellite at all), or some specific length in between.
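For a rough sense of the numbers, here's a quick Python calculation of the geostationary radius and of the net outward acceleration (centrifugal minus gravity, in the rotating frame) at a few radii beyond it - that net outward pull is what keeps the rope taut:

import math

GM = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.0905    # seconds
omega = 2 * math.pi / SIDEREAL_DAY

r_geo = (GM / omega**2) ** (1/3)
print(f"geostationary radius: {r_geo/1000:.0f} km "
      f"(~{(r_geo - 6378e3)/1000:.0f} km above the equator)")

def net_outward_accel(r):
    """Centrifugal minus gravitational acceleration in the rotating frame
    (positive beyond geostationary radius, which is what tensions the rope)."""
    return omega**2 * r - GM / r**2

for r_km in (42164, 60000, 100000, 144000):
    print(f"r = {r_km:6d} km -> {net_outward_accel(r_km * 1000):+.3f} m/s^2")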
Datamining for Health
Users submit to the website lists of everything they eat every day. This could be a tedious process, so the idea would be more viable if users are allowed to just send in receipts for everything they buy (food-wise), or fax them, or scan and e-mail them. That method would be imperfect, but still valuable, since the whole system is statistical anyway. It would work especially well for people living alone; for families, the system would be aware that the receipts apply to a family unit.
Whenever somebody in the family of a user (or a user living alone) has a health issue, they report it to the website, and they can even report casual results of their physicals, blood tests, or pulse, blood pressure and weight when they have these things done, or even answer questionnaires about subjective measures such as energy levels, concentration and general happiness.. Users can do those things once, periodically, sporadically or not at all, if they so desire, but they *are* obliged to report any clinical health problems that may arise.
The website uses all this data to perform data mining and draw correlations between foods people eat (primarily at the granularity of their ingredients) and health issues, as well as any aspects of health status or changes thereof, whether negative or positive. The study should include pharmaceuticals along with food - anything that's ingested, really - so users should also report their medications and over-the-counter stuff.
The results of a study like this could be invaluable; in fact, they would show in black and white all the things that 'til now the experts can only speculate about, including the damage caused by the things we're being sold as food.
Oh, the timespans involved in finding correlations between cause and effect would be decades, i.e., the project would run for decades; hopefully some users would submit for decades, or at least decades apart; so for example, if partially hydrogenated oils (or even aspartame) cause multiple sclerosis after 40 years of consumption, this project will find out.
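One of the simplest statistics the site could compute for each ingredient/condition pair is a relative risk between users who consumed the ingredient and users who didn't. A toy sketch with made-up numbers (a real analysis would also need confounder control and correction for the huge number of comparisons):

def relative_risk(exposed_with, exposed_total, unexposed_with, unexposed_total):
    """Risk of the condition among users who consumed the ingredient,
    divided by the risk among users who didn't."""
    risk_exposed = exposed_with / exposed_total
    risk_unexposed = unexposed_with / unexposed_total
    return risk_exposed / risk_unexposed

# Made-up example: 10,000 users who regularly bought a given ingredient,
# 12,000 who didn't, and how many in each group later reported a condition.
rr = relative_risk(exposed_with=240, exposed_total=10_000,
                   unexposed_with=180, unexposed_total=12_000)
print(f"relative risk: {rr:.2f}")   # ~1.60 with these made-up numbers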
Determine the best keyboard layout
An application that takes logs of keystrokes for many users and calculates an optimal keyboard layout for that dataset. It would take into account the basic mechanics of typing, such as:
-it's easier to alternate hands than use the same hand in succession
-letters right above/below each other take longer to type in succession because the finger has to move
-is it easier to type keys in the top row than the bottom row or vice versa?
keyboards could be developed for
-the average user
-the average C++ programmer
-the average programmer in other languages
-etc.
The mechanics-of-typing rules could alternatively be inferred automatically, by analyzing the timing of key presses with any given pre-existing keyboard layout. Times between key presses greater than a small fraction of a second would obviously be ignored.
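A sketch of the scoring core such an application might use: given digram counts from the keystroke logs and a candidate layout, total up a cost based on the mechanics rules above. The weights and the hand/row model here are placeholders - the point of the timing analysis would be to fit them from real data.

from collections import Counter

# Placeholder physical model for a 3-row layout: positions are (row, column).
def hand(pos):            # columns 0-4 left hand, 5-9 right hand
    return "L" if pos[1] <= 4 else "R"

# Placeholder weights, to be fitted from measured inter-key timings.
SAME_HAND_PENALTY = 1.0
SAME_COLUMN_PENALTY = 2.0     # finger has to move up/down between rows
BOTTOM_ROW_PENALTY = 0.5      # if the bottom row turns out to be slower

def layout_cost(layout, digram_counts):
    """layout: dict letter -> (row, column); digram_counts: Counter of
    two-letter sequences taken from the keystroke logs."""
    cost = 0.0
    for (a, b), n in digram_counts.items():
        if a not in layout or b not in layout:
            continue
        pa, pb = layout[a], layout[b]
        if hand(pa) == hand(pb):
            cost += SAME_HAND_PENALTY * n
            if pa[1] == pb[1] and pa[0] != pb[0]:
                cost += SAME_COLUMN_PENALTY * n
        cost += BOTTOM_ROW_PENALTY * n * (pb[0] == 2)
    return cost

# A search procedure (e.g. simulated annealing over letter positions) would
# then look for the layout minimizing this cost for a given user population.
digrams = Counter({("t", "h"): 500, ("h", "e"): 450, ("a", "n"): 300})
qwerty_ish = {"t": (0, 4), "h": (1, 5), "e": (0, 2), "a": (1, 0), "n": (2, 5)}
print(layout_cost(qwerty_ish, digrams))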
LED ideas
Instead of having many conventional LEDs to make up a bright light, why not just make one LED where the actual diode part of it is very fat, long, or multi-stranded?
Also, instead of wasting 50% of the light to refraction within the plastic, make the actual light-emitting part like the filament of a regular bulb: don't touch it with anything, just surround it with a vacuum and a bulb.
Freaky toy idea
A toy that's shaped kind of like a rake (but very small) and without the handle. You wind it up, and as it unwinds, a spinning helix or other shape inside - connected to all the inner prong ends - causes the outer ends to move in a pattern that makes the toy 'crawl' forward... very quickly. No wheels, just rake-like prongs in front and a smooth curve on the back to slide along the floor.
A better keyboard
Make a keyboard that detects slight movements in your fingers, so all you have to do is twitch, basically - just make a slight impulse in the direction your finger would normally have to move to press a given key. Perhaps use technology related to that used in a pointing stick to detect the twitches. (Let it also detect down-pressure.)
Of course this alone couldn't account for all the keys on a keyboard. Some keys you would just have to move your finger for.
Who knows how fast people could type with this..
Floating aerogel
Aerogel is very light and mostly air, but very strong. You probably need to include some amount of gas to make aerogel, but if you could make it in a really low-pressure environment, and perhaps even with a light gas, like hydrogen, perhaps when exposed to normal atmospheric pressure it will retain its structure and be light enough to float!
Glass tree with butterflies
glass (or crystal) tree with silver-colored (or pure silver) butterflies in it that are designed so that their wings flap slowly in any light breeze.
Gold-colored butterflies might be another option.
Also, a silver tree with gold butterflies. (really needs fine detail in the tree)
Tuesday, July 29, 2008
Cheap spectrometer
A cheap spectrometer, covering the visible light range, can be made by simply having a prism, a 1-dimensional CCD array, and the circuitry to read the CCD array, all within an enclosed case. Lenses can be used to focus light onto a point on the prism; the wider the lens, the less time a shot will take.
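The only non-trivial software the device needs is a mapping from CCD pixel index to wavelength. A crude sketch assuming two known calibration lines and linear interpolation (a prism's dispersion is actually non-linear, so in practice you'd fit a low-order polynomial to several known lines):

def pixel_to_wavelength(pixel, cal_a, cal_b):
    """Linear interpolation between two calibration points, each given as
    (pixel_index, wavelength_nm). Good enough as a first approximation."""
    (p1, w1), (p2, w2) = cal_a, cal_b
    return w1 + (pixel - p1) * (w2 - w1) / (p2 - p1)

# Hypothetical calibration: a green laser pointer (532 nm) lands on pixel 310
# and a red one (650 nm) on pixel 845 of the 1-D CCD.
print(pixel_to_wavelength(500, (310, 532.0), (845, 650.0)))  # ~573.9 nm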
CRI > 100 windows
Color rendering index is defined as 100 for a blackbody radiation curve, but what if natural light isn't actually the best possible curve for distinguishing color? It could be something else - for example, a flat envelope. What if we had windows that filter sunlight and pass along white light of a lower intensity, but with an intensity curve that's superior for distinguishing color? While the overall intensity would be lower, the iris would adjust, and therefore the eye could still see with better color distinction.
Digital Canvas
A 1024-level touch-sensitive canvas. You can use a brush on it. It's also an LCD screen. To use a brush, you pick a color using another digital display. It can use various color models and an artist's palette where you mix colors using the brush.
The good thing about *this* canvas is that it has an undo.
Other possible options: Apply certain filters to the image or change its colors, use brush modes available in Photoshop, vector graphics module
Automatically save work every few seconds or as close to real-time as possible.
Can transfer paintings to computer via USB.
Card game: Swap
(lemme know if that name is taken.)
This game is similar to the Stack 'em game.
Use a shuffled deck.
The object is to fill up four stacks, one per suit, from ace to king, or 2 to ace if preferred but only if decided before the game starts.
You get 4 additional stacks that you can work with at will (they start out empty). Cards are face-up. At any time you may:
a) take a card off the top of the deck and place onto any of these four stacks
b) take a card off of any of these stacks and put onto one of the four suit piles
c) take an entire stack and place it on top of another stack
You of course may also take a card off the top of the deck and place directly onto one of the four suit stacks.
Do this until you're stuck or you win.
Note that, unlike in the Stack 'em game, you *may* put a card of a higher value on top of a card of a lower value. Obviously in some cases this will get you stuck. Part of the game is figuring out when you can do this without causing a paradox, or just crossing your fingers and hoping that it doesn't..
Another difference is that in the Stack 'em game, since you can't place a higher card on a lower card, you can just keep the stacks directly vertical -- good for conserving space when you need to. In *this* version you'll need to look at the stack histories to know when you should or shouldn't put a higher card on top of a lower card, so you may want to keep the stacks (or parts of the stacks) cascaded.
Card game: Grid
Getting a feel for the strategy of this game probably requires following the instructions and playing it!
Any number of players can play, although it probably becomes pointless with too many players (especially for variant 1).
This game has two variants.
Variant 1
Use a shuffled deck.
Place 8 cards, face up, on the table in a pattern like this:
AAA
A A
AAA
The eight card positions are actually eight potential stacks.
Give each player one card, face-up. These are their personal stacks. The player with the most cards in their stack at the end of the game wins.
Place the rest of the deck, face-down, in the middle position.
Players take turns around the table. On a turn you first fill any missing stacks out of the eight with cards from the deck (one card per empty stack). Then you do *one* of the following:
a) place one card/stack on another card/stack, as long as the top cards of the two stacks are either the same suit or the same value. Repeat as desired.
b) take exactly one stack and place it on your personal stack. You may only do this if the top card on your personal stack is the same suit or value as the top card on said stack.
You may not pass. (You must either take a stack or place at least one stack/card atop another, unless no move is possible.)
Variant 2
Just like Variant 1, except that you can't "repeat as desired": you either do a) exactly twice, or do b). You cannot pass. If you choose a) and only one such move is possible, you do it once.
Solitaire game: Stack 'Em
Start with a shuffled deck.
The object is to stack from ace to king (or 2 to ace if preferred, as long as it's decided beforehand) in each of four stacks, one stack per suit.
You get two stacks of your own to work with (they start with no cards in them). Cards are face-up. The rules for these two stacks are:
a) you can place a card on a stack only if it's of an equal or lower face value than the card currently on top
b) you can take a card off of either stack at any time to place onto one of the four suit stacks.
You may hold at most 3 cards in your hand at any one time. You can take a card out of your hand and put it on a stack at any time.
(You may not take a card from any of the stacks and put it in your hand.)
Take a card from the top of the deck at any time and either place it in your hand or onto any of the stacks, if possible.
Do this process until you're stuck or you win.
Obviously, variants of this game can be created to make it easier, such as having three stacks and/or five cards in your hand, but I find that with a little bit of practice you can win with the above rules half or most of the time. I'd recommend the challenge.
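For reference, here's a minimal sketch of the two legality checks above, assuming aces are low (rank 1) and a card is just a (rank, suit) tuple. It's an illustration of the rules, not a full implementation of the game.

# Minimal sketch of the Stack 'Em legality checks, assuming aces low (rank 1)
# and cards represented as (rank, suit) tuples.

def can_place_on_working_stack(card, stack):
    """Rule a): a card may go on an empty working stack, or on a card of
    equal or higher rank (i.e. the new card's value is equal or lower)."""
    return not stack or card[0] <= stack[-1][0]

def can_play_to_suit_stack(card, suit_stack):
    """A suit stack builds upward in one suit, starting from the ace."""
    if not suit_stack:
        return card[0] == 1  # must start with the ace
    top_rank, top_suit = suit_stack[-1]
    return card[1] == top_suit and card[0] == top_rank + 1

# Example: a 5 can go onto a 9 on a working stack, but not the reverse.
assert can_place_on_working_stack((5, "clubs"), [(9, "hearts")])
assert not can_place_on_working_stack((9, "hearts"), [(5, "clubs")])
assert can_play_to_suit_stack((1, "spades"), [])
assert can_play_to_suit_stack((2, "spades"), [(1, "spades")])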
Idea for a novel
According to Moore's law, the number of transistors in CPUs doubles every 18 months. Personal computers, to say nothing of supercomputers, have gone from 92,000 instructions per second with 4,736 bytes of random-access memory in 1971, to 59,455,000,000 instructions per second with 4,294,967,296 bytes of RAM in 2008 -- practically a million-fold increase in computing power within a single lifetime. This story is set in the future: a prominent entertainment company has simulated the human brain and basic sensory functions, over 10^10^11 trials in a simulated theatre environment, playing every possible movie of lengths from 90 to 180 minutes. The goal is to get as much money as possible, by making the most "successful" movie possible, without hiring a single actor, director or cinematographer. Success is measured by the stimulation of certain pleasure circuits in the brain. Brains of corpses ranging in age from 17 to 59 were topographically and chemically scanned to give the simulators starting conditions encompassing memories, associations, and personality. The resulting movie was more than a hit. People /would not quit watching it/. They would go back again and again in an anxious frenzy, emptying their wallets and sometimes pawning off personal possessions to see the movie. People who had illegally obtained the movie for home viewing would become sickly and would not even take the time to eat. In both camps people were showing signs of addiction and even mild psychoses. Experts - the ones who were not too busy watching the movie - were afraid that society itself was going to collapse if something weren't done soon. The economy was already halfway to being put on hold. Many theatres refused to continue showing the movie. The company was urged to pull the movie from all theatres, but they would not. And then..
An SLM color space
AFAIK, every existing color space has its strengths and weaknesses with respect to how much of the range of colors the human eye can actually distinguish it can represent. And translating from one color space to another requires an algorithm for each combination of color spaces, or an intermediate color space, which means *two* steps where loss of information can occur.
The solution:
The human eye physically perceives color in three dimensions: 420-nm resonance, 564-nm resonance, and 534-nm resonance. My proposed color space would simply store a color in terms of how much it excites those three respective cone cell types, thus covering human vision's entire color space (sorry to the tetrachromats) in only three values (just like HSB, Lab and RGB). I believe the reason this color space hasn't been used thus far is that you can't use it to directly reproduce (display) a given color -- e.g., if you shone a 564-nm light (even a monochromatic one) at the given intensity, it would also excite the 534-nm receptors to some degree. However, given a specific display device's color space, you could translate a color from this color space to that one for displaying or other format conversions. Thus this color space could be used as an intermediate/universal color space when translating between color spaces or for digital photographic manipulations. AFAIK, even Lab and HSB aren't used directly for displays or pigment combinations anyway.
Also, digital cameras/video cameras should be made with sensors at these respective resonant frequencies instead of the typical red, blue and green, and should save in that format. (You may not be able to *display* colors in SLM primaries, but if the eye can perceive at those resonant frequencies, a camera should be able to as well.) This would capture images in a more true-to-form way, which means images could be represented optimally on *any* display device/printer, and photo-manipulation algorithms would incur less loss of information. Of course, the camera's PC software should be able to transparently convert those files to the currently widely used standards, or the camera itself should optionally save in either/both formats. This may require special filters whose spectral envelopes match those of the cone-type sensitivities.
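A rough sketch of what "storing a color as cone excitations" means computationally: integrate a spectral power distribution against the three cone sensitivity curves. The curves below are crude Gaussian stand-ins peaked at the wavelengths mentioned above; a real implementation would use measured cone fundamentals (e.g. Stockman & Sharpe).

import numpy as np

wavelengths = np.arange(380, 781, 1.0)  # nm, visible range

def cone_curve(peak_nm, width_nm=45.0):
    # Crude Gaussian stand-in for a cone sensitivity curve (illustration only).
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

S, M, L = cone_curve(420), cone_curve(534), cone_curve(564)

def to_slm(spectrum):
    """Encode a spectral power distribution as the three cone excitations."""
    return np.array([np.trapz(spectrum * c, wavelengths) for c in (S, L, M)])

# Example: a narrow-band light at 580 nm excites L and M strongly and S hardly
# at all -- the overlap that keeps these values from serving as display primaries.
yellow = np.exp(-0.5 * ((wavelengths - 580) / 5.0) ** 2)
print(to_slm(yellow))  # [S, L, M] responses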
Labels:
CMY,
color model,
color space,
cone types,
HSB,
HSV,
idea,
Lab,
RGB,
RYB
What is the timbre of the Voice of Humanity?
Take thousands of hours of audio that is just voices, accurately representing proportions of men, women, children, races, etc., break it into sections of, say, 10 seconds, and superimpose *all* of them, i.e., add them *all* up into one 10-second clip. This may have to be done using a large bit depth and then normalizing to fit into 16-bit samples, and/or using compression.
Alternatively, get as many voices as you can, but ones that are just a sustained vocal sound (like 'carrying a note'), without phonemes.
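The summing step itself is simple; here's a sketch, assuming the clips have already been loaded as equal-length 16-bit arrays at a common sample rate (the loading and alignment are left out).

import numpy as np

def superimpose(clips):
    """Sum many int16 clips in a wide accumulator, then normalize to 16-bit.
    (Any dynamic-range compression would be applied before the final scaling.)"""
    acc = np.zeros(len(clips[0]), dtype=np.float64)
    for clip in clips:
        acc += clip.astype(np.float64)
    peak = np.max(np.abs(acc))
    if peak == 0:
        peak = 1.0
    return np.round(acc / peak * 32767).astype(np.int16)

# Example with three fake "voices" (white noise), 10 s at 44.1 kHz:
rng = np.random.default_rng(0)
fake_clips = [rng.integers(-3000, 3000, 441000).astype(np.int16) for _ in range(3)]
voice_of_humanity = superimpose(fake_clips)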
Pi is NOT infinite! (and neither is the mandelbrot)
Is pi infinite? If it is, is it special in this regard? There are literally countless formulae you could use to construct "infinite" numbers, but what does that mean? Surely there is no more information there than is contained in the algorithm for generating it (this applies to fractals too). It's only when you try to express it as a decimal that you go into an infinite loop.
Here's the idea: Develop a theory of how hard it is to construct patterns that never repeat, and in what ways you can do this, in order to gain insight into the supposed infinity of pi. This would *not* be a theory pertaining specifically to generating numbers that expand to infinitely many places; that would defeat the purpose, because irrational numbers are already well explored, and also the conversion to decimal (or any other representation) merely complicates the reasoning behind pi supposedly being "infinite." It's to be merely a theory about the ease or difficulty of constructing infinite, non-repeating patterns in general.
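As a trivial illustration of how little machinery an infinite, never-repeating pattern actually needs (not the proposed theory, just two familiar constructions):

from itertools import count, islice

def growing_gaps():
    """0.101001000100001... -- a 1 followed by n zeros, for n = 1, 2, 3, ...
    The pattern never becomes periodic, yet the generator is a few lines."""
    for n in count(1):
        yield 1
        for _ in range(n):
            yield 0

def thue_morse():
    """The Thue-Morse sequence (bit-parity of n), another classic
    non-periodic pattern with a tiny description."""
    for n in count():
        yield bin(n).count("1") % 2

print(list(islice(growing_gaps(), 20)))
print(list(islice(thue_morse(), 20)))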
How the pyramids could have been made
1. Tie a rope to a block.
2. Run the rope all the way over the top of the pyramid and down the other side and onto the ground.
3. With many people on the ground pulling from the other side, haul the block up the face of the pyramid.
4. Probably have a few people walking up with the block (if it's too steep they could tie themselves to the rope) in order to a) make it slide easily by using rollers, and b) guide it to the right place and/or set it in position when it gets there.
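A back-of-the-envelope sanity check on the hauling force, with every number an assumption chosen for illustration: a ~2.5-tonne block, a ~52° face, rollers giving an effective friction coefficient around 0.3, and ~300 N of sustained pull per person.

import math

m = 2500.0                  # block mass in kg (assumed)
g = 9.81                    # m/s^2
theta = math.radians(52)    # face slope (assumed, roughly the Great Pyramid's)
mu = 0.3                    # effective friction coefficient with rollers (assumed)
pull_per_person = 300.0     # sustained pull per hauler in newtons (assumed)

# Force needed along the slope: weight component plus friction.
force_needed = m * g * (math.sin(theta) + mu * math.cos(theta))
people = math.ceil(force_needed / pull_per_person)

print(f"force along the slope: {force_needed / 1000:.1f} kN")
print(f"haulers needed on the far side: about {people}")

With those assumed numbers it works out to roughly 80 haulers per block, which at least isn't absurd.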
Different kind of musical k/b
Instead of just one row of boring old keys, have multiple rows of keys - but not like the k/b's that already have multiple rows: each key would merely be a square, and the rows would merely comprise a matrix of squares - a lot simpler. If it's a MIDI k/b then, of course, the extra rows can be used for anything, such as other instruments, sound modifiers, or different octaves.
The question still remains of how to handle black vs. white keys. One way is simply to have black columns in the matrix which are thinner than the white columns. Another way is to forgo the black vs. white distinction altogether and simply have 12 equally spaced keys per octave.
While we're on the topic, an even more advanced k/b would just have a big touchscreen and imagination would be the limit as to how it might interface with the user.
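For the matrix-of-squares idea, the mapping from a square to a note is about as simple as it gets. A sketch, assuming the second option above (12 equally spaced keys per octave) and assuming each higher row is simply the next octave; rows could equally be mapped to other instruments or controllers.

COLUMNS_PER_OCTAVE = 12
BASE_NOTE = 36  # MIDI note for row 0, column 0 (C2) -- an arbitrary assumption

def note_for_square(row: int, col: int) -> int:
    """Map a (row, column) square in the matrix to a MIDI note number."""
    return BASE_NOTE + row * COLUMNS_PER_OCTAVE + col

# Example: each row up is an octave; columns step by semitones.
assert note_for_square(0, 0) == 36   # C2
assert note_for_square(1, 0) == 48   # C3
assert note_for_square(2, 7) == 67   # G4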
Man Overboard!
The idea is a setup that cruise liners and other large ships can use to retrieve someone should they fall overboard. Just recently someone fell overboard from a military ship and was never found. The setup would be one of two things:
a) A video camera combined with software that runs on the ship's computers to analyze ripples in the water to determine any local sources of ripples, even small ripples. That way you can just point the video camera down at the water - anywhere, or at least the side they fell off from - and determine their location. The question with this option is how much a suitable camera would cost.
b) A transversely flexible matrix of accelerometer points attached to tiny floats that you lay on top of the water, which relays the information to software that maps the waves and determines the person's location in much the same manner as above, just using a more direct means of recording ripples. *Might* be cheaper and/or more effective than the first option. Actually, forget the accelerometers: since the points are all connected to each other, the pitch changes of their connecting arms can simply be measured with potentiometers.
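For either option, the localization step could look something like this sketch: given a handful of points sampled along a detected circular wavefront (from the camera image or the float grid -- the detection itself, which is the hard part, is assumed), a linear least-squares circle fit recovers the center.

import numpy as np

def ripple_center(points):
    """Fit x^2 + y^2 + D*x + E*y + F = 0 to the wavefront points (the 'Kasa'
    circle fit); the splash location is the fitted center (-D/2, -E/2)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return -D / 2.0, -E / 2.0

# Example: noisy points on a ripple centered near (12, -7) meters.
rng = np.random.default_rng(1)
angles = rng.uniform(0, 2 * np.pi, 30)
pts = np.column_stack([12 + 5 * np.cos(angles), -7 + 5 * np.sin(angles)])
pts += rng.normal(scale=0.1, size=pts.shape)
print(ripple_center(pts))  # roughly (12.0, -7.0)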
Perhaps people don't fall off and get lost that often, but hey, if cruise liners start implementing this system then people are going to start asking, "does this cruise liner have the Man Overboard system installed?"
In Soviet Russia, starving artist pays YOU!
What if you had a piece of artwork that you really wanted to be seen/heard, but people just didn't see its value, and/or you didn't have the notoriety you'd like? And what if you were RICH or reeeally passionate about your art and had some cash to spare? The idea here is for exhibitions where *artists* pay *you* to see their art. That way people with a very important message can get their message out, and it gives couch potatoes an excuse to go out and get some culture.
Donations can also be factored into this.
I decided here is where I'll post my ideas -- a few of them.
--
Recycling Supercenter
A vast central repository where everyone can send their old, broken items. They're all sorted by model and categorized. When enough items of the same model (or in some cases, different but similar models) are available to make a working object out of them, an employee does that. Then it is sold as refurbished or in a thrift store, etc.
And yes, I do mean VAST. There are hundreds of thousands, if not millions, of models out there. The repository could be broken up by type of object into different stations around the country, but that wouldn't serve a purpose since it would require the same airfare anyway--as there would be only one place to send an object of its given type.
Or perhaps it could be broken up into multiple locations so that the same object types are repeated, but only for certain models or sets of similar models--the ones that are popular enough that the process wouldn't be thwarted by having redundant locations.
Or just have it one large repository, preferably in the center of population density.
Labels:
conservation,
eco-friendly,
economy,
environment,
idea,
recycle,
refurbish,
reuse,
thrift
Tuesday, June 03, 2008
Toward Reasonable Light Sources (not an idea)
Is it just me, or is the yellow-orange tint of indoor incandescent lighting totally depressing? It's like one step away from hell (which would, itself, be red). For most of my life I must have denied it - it was hard to notice because the eye adjusts anyway, and when I did notice, I must have figured maybe it was subjective. But now I've come to realize that the light simply is inferior. It IS yellow. It's not white; it's yellow. I don't know about you, but I find this unacceptable.
Light from the sun, or any other source that emits light by being heated up, follows a spectral envelope (that is, an intensity vs. frequency graph) called a blackbody spectrum. (The very existence of this curve directly led to the invention of quantum mechanics.) The shape of the curve depends on (and ONLY on) the temperature to which the body is heated. The average temperature of the surface of the sun (the photosphere) is 5780K, hence the "color temperature" (the temperature whose blackbody spectrum matches) of direct sunlight is 5780K. That basically defines what we see as white. The color temperature of an incandescent bulb is only 2600K to 3300K -- a distinctly yellow-orange light. Halogen and typical fluorescents are higher, but not by a whole lot.
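To put rough numbers on that, here's a small sketch evaluating Planck's law (the blackbody curve, which depends only on T) at the blue and red ends of the visible range for a 2700K filament versus 5780K sunlight:

import math

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_nm, T):
    """Blackbody spectral radiance (per unit wavelength) at temperature T."""
    lam = wavelength_nm * 1e-9
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

for T in (2700, 5780):
    blue, red = planck(450, T), planck(650, T)
    print(f"{T}K: radiance at 650 nm is {red / blue:.1f}x that at 450 nm")

# A 2700K filament puts several times more power at the red end than the blue
# end; 5780K sunlight is roughly balanced -- which is why the filament looks yellow.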
Fortunately there now exist fluorescent bulbs that emulate the color temperature of sunlight, or even of an overcast sky (6500K). No other common household lighting will do that, but fluorescents aren't perfect either: I say "emulate" because the spectrum doesn't actually follow a blackbody curve. They use photoluminescence, a form of cold-body radiation. That means, since the spectral output doesn't naturally adhere to a blackbody curve, they have to simulate it by combining multiple colors. The light itself may *appear* white (just as white as the sun), but colors that it reflects off of will render differently; they will have less variation between them. This applies to any fluorescent lighting - it's unnatural light. Hence a 5000K fluorescent light isn't *really* 5000K; 5000K is its Correlated Color Temperature (CCT), which basically just means that it's no bluer or redder than 5000K blackbody light.
If you're wondering why two light sources that appear equal in color could render reflected colors differently, it's because the eye doesn't see colors according to their full spectral envelope. If it did, the experience would be unimaginable. The eye perceives three dimensions of color; there are three specific cell types that resonate with three different wavelengths of light. They don't resonate ONLY with those wavelengths, but generally speaking, the closer a frequency is to a cell type's resonant frequency, the more that cell type will respond to it. Of course light is (usually) composed of many frequencies; they all add up and result in the three respective stimulus levels for any particular point of color. So the same-looking 'white' can be made using an unlimited number of different combinations of frequencies. It can be 5000K blackbody radiation, for example (emitting light on all visible frequencies), or it can be as few as two different frequencies: blue and yellow. But in addition to that, reflected colors have their own frequency distributions. Cyan, for example, could actually be reflecting cyan, or it could only be reflecting a combination of blue and green. Both of these look the same to the eye, at least under a given light. But what if our light source were (for example) a combination of monochromatic red, blue and green, and the color reflected only cyan? Then you wouldn't even see it, because cyan is none of those. So between the spectral envelope of the light source, the spectral envelope of the reflective surface, and the dimensionality of color perception, you get the same color looking different under different lights, even if the lights themselves appear the same. Full-spectrum light is just better for rendering color.
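Here's a numeric toy version of that argument: two lights engineered to excite the cones identically (so they look like the same white) can still render a cyan surface very differently. The cone curves are crude Gaussian stand-ins, not real cone fundamentals; only the mechanism matters.

import numpy as np

wl = np.arange(400, 701, 1.0)  # nm

def band(peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

cones = np.vstack([band(420, 40), band(534, 40), band(564, 40)])  # S, M, L stand-ins

def response(spectrum):
    """The three cone stimulus values produced by a spectrum."""
    return cones @ spectrum

flat_white = np.ones_like(wl)  # broad, "full-spectrum" light

# Build a three-narrow-band light whose cone responses exactly match the flat
# light's (a metamer of it), by solving a 3x3 system for the band intensities.
primaries = np.vstack([band(450, 8), band(530, 8), band(610, 8)])
weights = np.linalg.solve(cones @ primaries.T, response(flat_white))
three_band_white = weights @ primaries

cyan_surface = band(490, 15)   # a surface that reflects only around 490 nm

print(response(flat_white))                       # these two match:
print(response(three_band_white))                 # the whites look identical
print(response(flat_white * cyan_surface))        # cyan renders normally here
print(response(three_band_white * cyan_surface))  # but comes out far dimmer here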
Some stores purposely use fluorescent light to, for example, make their tomatoes appear redder. (Some colors may appear more vivid; the only thing that's lessened is the differentiability between colors.) While it may look good for their tomatoes, I don't think it's overall a good thing. The environment just appears sterile and it has a deadening effect on the feelings.
There is a measure of color-rendering ability, called the Color Rendering Index (CRI), used to gauge various light sources. A true blackbody profile, which includes incandescent lights, has a CRI of 100 ("perfect"). Note that CRI is independent of color temperature or CCT. A fluorescent bulb can have a CRI as low as 49, although they've gotten better over the years. 85 and above is considered decent; 90 and above is considered excellent. A fluorescent bulb with a CCT of 5000K to 6500K and a CRI of 90 or above is technically considered a "full-spectrum" light source, but I'm not so sure I agree. What I have now are 2 fluorescent bulbs with a CCT of 5000K and a CRI of 92, called Homelight Natural Sunshine, by Philips. The color temperature is fine - it really does look like daylight - but the color rendering just seems off to me. It doesn't seem natural. It could just be my imagination, but I don't think so. And the CRI is an outdated measurement. It uses an inferior color map (a way of mapping the entire range of perceivable colors using a number, usually 3, of particular dimensions), and there are better methods available. It's considered not very good for visual assessment, but light manufacturers still use the CRI, so that's what we have to go on.
So, I plan to get a better light source than fluorescent one of these days. The only better light source I KNOW OF is the xenon arc lamp, which seems to have - depending on whom you ask - a color temperature of 4500K (still higher than halogen or typical fluorescents) to 6500K and a CRI of 95 to 99+. But xenon arc lamps (and their power supplies) are not cheap, nor even easy to obtain for that matter. Also they take about a minute to warm up, as far as I can ascertain. White LEDs, by the way, can also render light at 5000K and above (and they last longer), but they have even lower CRIs than fluorescents. (They use phosphors, just like fluorescents do.. there's no such thing as a white-light-emitting diode.)
But they're also working on something called a Quantum Dot White LED (QD-WLED). An LED outputs light in a very narrow frequency band, according to the size of its "band gap", and different band gap sizes have hitherto been achieved only by developing and using different materials. Quantum dots, on the other hand, allow light emission at arbitrary colors with the same given material, depending on the sizes of the dots (using many different dot sizes on one QD-LED). This could lead to white LEDs that are extremely close to a natural light curve. Guess I'll just have to wait and "see."