Monday, March 30, 2009
So I just tried this experiment
<html><body>If only the outer container would auto-adjust its height (or width) according to the max height or width of its inner containers, alignment wouldn't be such a pita.
<div>
<div style="height: 100%; background-color: red; float:left">
this<br/>
is<br/>
a<br/>
test
</div>
<div style="background-color: blue; float:left">
hi
</div>
<div style="height: 100%; background-color: green; float:left">
foobar
</div>
</div>
</body></html>
Friday, March 27, 2009
CSS Ideas
So I'm working with CSS, and for once I'm going to try to separate content from presentation from the outset, since that's such an in thing to do these days.
Immediately a pretty big limitation of CSS came to mind in regard to doing that, exactly the thing they're always beating you over the head saying you're supposed to be using CSS for. I've already written a little bit before about how CSS should have been different, but here are some of the thoughts that came to mind now, and some that have been brewing for a couple of weeks following my more recent work with Ecography.com.
So what tags do we have for containing content, other than div? Span would be nice, except that it has strange limitations: it can't contain certain other HTML tags, and apparently it can't even contain line breaks. So we just have div, and then whenever we don't want a damn margin, because all we want to do is demarcate data, we have to write CSS code to specify that. Divs also by default come below each other in sequence, unlike normal information, which flows left to right, so that's another thing you sometimes have to go out of your way to change in CSS.
So clearly we're missing something here, something in between div and span, or at least a span that likes people.
I've also seen ul's used with CSS, for menus. You have to modify them extensively to get rid of the bullets and margins, maybe make them go left-to-right, etc. It seems like they're being used in cases like these for something other than their intended purpose, and I tried doing the same thing in some other way than using ul's, and it didn't work. So that implies that ul's are being used to fill in for some other shortcoming.
What I was really thinking just now, though, is that the degree of separation of content from presentation in CSS is very limited. Basically you can specify sequences of information-elements, in specific hierarchies, and then display those elements and hierarchies any way you want; but if the way you want to display them involves changing the order of the information (without using absolute positioning), that's a no-go. That can only be done by editing the HTML or the scripting code. Since a CSS file is designed to define only attributes of classes/tags on a syntactic level, there really isn't remotely a way for it to specify the order of information. It would have to be changed dramatically.
To preserve backward compatibility, and CSS's ease of specifying attributes, a new kind of syntactical entity should probably just be introduced that can specify sequences and hierarchies of information. The content itself, which would be found in the HTML code, would simply be referred to by id attributes. In the interest of the XML movement and consistency with HTML, the new syntax should probably be HTML-like, perhaps a subset of HTML. But then again, in the interest of consistency/simplicity, why limit the HTML? Just make it full HTML, the only difference being that information is referred to by id attribute. And what if the HTML itself contains id attributes? Just allow it to refer to those too, I think.
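As a rough sketch, such a presentation-order file might look something like this (entirely hypothetical syntax; the `layout`, `column`, and `ref` tags are made up for illustration, and the ids refer to elements in the HTML):

```html
<!-- Hypothetical "order" stylesheet: display #sidebar and #footer in a
     column, then #main, regardless of their order in the HTML source. -->
<layout>
  <column>
    <ref id="sidebar"/>
    <ref id="footer"/>
  </column>
  <ref id="main"/>
</layout>
```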
But this just begs for a much simpler and somewhat more generalized and versatile solution: a) allow *all* HTML to include other parts of the HTML, referred to by their id attributes, and b) allow HTML to include other HTML files. B) isn't so radical, since HTML can already include .css, .js, and image files. We're only making it more consistent here.
Now the only problem is that there's the possibility for circular references. It's only a philosophical problem, though, since they can easily be detected and ignored. Meaning that the simple solution here is: Don't Code Circular References.
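Detecting circular references really is just a walk with a path set. A sketch in JavaScript, where the `refs` map is a stand-in for parsed documents and the `{ref: ...}` placeholders are the hypothetical id references (none of this is a real browser API):

```javascript
// Sketch: expanding hypothetical id-based HTML references while
// detecting and ignoring circular ones. Each entry in "refs" is a list
// of literal strings and {ref: id} placeholders.
function resolve(id, refs, path = new Set()) {
  if (path.has(id)) return "";            // circular reference: ignore it
  const nextPath = new Set(path).add(id); // copy, so sibling refs still work
  return refs[id]
    .map(part => typeof part === "string"
      ? part
      : resolve(part.ref, refs, nextPath))
    .join("");
}

const refs = {
  page: ["<p>", { ref: "menu" }, "</p>"],
  menu: ["[home]", { ref: "page" }],      // refers back to page: a cycle
};
console.log(resolve("page", refs)); // "<p>[home]</p>"
```

The cycle is silently dropped at the point where it recurs, which is exactly the "detect and ignore" policy described above.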
Admittedly, it is a little awkward to have to specify that the content be hidden so that it can be shown in the correct place. But there is a solution: relegate all such content to <content> tags, which are automatically hidden. Actually, "content" is a rather ironic name for things that will be automatically hidden, but you get the picture: the tag can be called anything.
Another shortcoming, and this is really the one that bugs me most often, is alignment.
There's a *big* issue in CSS where all the CSS gurus yell DON'T USE TABLES, at least not for layout purposes. Now here's the rub: people use tables for a reason. It's *easy*. In HTML, as in almost every other area of computer technology and life itself, simple things should be able to be done simply. Doing the same things you can do with a table in CSS is a real P.I.T.A. It's not at all obvious how to do it (to understate the problem), it's not quick and easy, and even a highly paid web front-end guru admitted to me that CSS has shortcomings in this area, when I asked him whether there should perhaps be a CSS equivalent to tables.
Is there a way to create a feature something like tables, per se, for CSS? I don't know, because I don't know precisely what the contention is with using tables; i.e., in exactly what ways does using tables prevent one from customizing layout in CSS? Tables provide alignment. What do they disprovide, and would this same limitation apply to adding, per se, a table-like feature to CSS? How would we do that anyway, on a semantic level?
I think a more generalized/flexible solution would be to provide some kind of alignment tokenization. Like you insert a token A at point B, then at point C say align this edge or that edge of this element with token A. Since you don't really have a place *in* HTML *at* the right/bottom/etc edge of a layout element, then either Token A has to specify which edge of the padding, margin or border it's binding to, or Token A can simply bind to a layout element and Token B would specify which edge and such of Token A to align to.
It can be much simpler than that, though. You'll almost always want to align a left edge with a left edge, a right edge with a right edge, a border with a border, a content edge with a content edge, etc. This symmetry can be the only thing allowed, or it can be assumed by default allowing a simpler syntax in the most common usage, probably just by leaving out extra parameters: for example, you could align B's left border with "A" (implying A's left border), "A/content" (implying the left edge of A's content), "A/right" (implying A's right border), or "A/right/content".
There should probably be even further shortcuts: for aligning both the top and bottom with another element, both the left and the right, or the content, margin and border all at once. To align B's top and bottom with A's top and bottom, at all three box layers, we could just align "B/horiz" with "A". To align only the top and bottom borders, align "B/horiz/border" with "A".
Then of course, we could also use relative positioning via the normal syntax to shift Element B's left or right content, margin, or border left or right of A's, if we so wanted.
I've decided that the only token mechanism we need is the id attribute. So A is called A because that element's id="A". This simplifies things.
Exactly how to specify the alignment remains in question. Somehow associating "B/left/border" with "A/margin" doesn't really fall into either CSS's or HTML's syntax; there are just no conventions for associating two relatively arbitrary values. We could just say, within B's definition,
align-top-border: A/bottom
but then that would combinatorially create 12 to 24 new CSS keywords. If CSS's syntax were a little more flexible, we could say
align {top-border}: A/bottom
or better, something like
position: relative; top-border: {align: A/bottom; whatever: 2px}
(to align B's top border with A's bottom border and shift it down 2 pixels). And perhaps CSS's syntax should be that flexible. Then we could even do this, for example:
top-margin: {position: absolute; whatever: 100px}
bottom-margin: {position: relative; whatever: 2px}
Or perhaps it would be
top-margin: {position: absolute; 100px};
bottom-margin: {position: relative; 2px}
Of course another idea would be to simply have a completely separate alignment table:
align
{
B/horiz/border: A/content;
C/horiz: D;
}
That syntax would have to exist alongside one of the other formulations if at all, though, because it allows no cascading definitions, it's not object/class-oriented, and it can't be done in-line.
This idea so far can't do everything tables do. It can't size a bunch of cells (divs) according to the widest or highest automatic size. Do we *really* need to do that for content layout, though? (Actually, I think we do.) And what if we wanted to align two or more left or top margins where they would naturally go if they were one long margin? In these cases we're not specifically aligning A after B, nor B after A; we want those margins all in the same alignment class. What could we do? Probably the best solution is just to align them to an arbitrary common name, the same way in which we would otherwise use an id. This name would never be followed by /top, /horiz/content, etc., though, because that would be meaningless.
*Now* our alignment can do everything tables can do (I think?).
This behavior should not be an automatic fall-back for when no object happens to have the given name as its id, though, because it's doing a rather different thing. Instead of inventing new keywords to align this way, we should probably just precede the name with a special symbol, like a %.
<div style="align-top: %hitop; align-bottom: %hibot; float:left">
This div's<br/>
height adjusts<br/>
to the height<br/>
of its text.
</div>
<div style="align-top: %hitop; float:left">
This div is smaller.
</div>
<div style="align-top: %hitop; align-bottom: %hibot; float:left">
This div goes down just as far as the first div does.
</div>
That was just an example to summarize and to show how simple it can all be, but it also raises a minor issue I hadn't thought of: how can we do a "horiz" or "vertical" align (implying left & right or top & bottom) with a %-preceded name? I can't think of a sound and consistent way, so we may have to do it just as coded above. We *could* have the right or bottom margin be a second value that's used only for "horiz" or "vertical" alignments, while anything else referring to that name simply uses the top or the left value; and if only *one* element does a horiz or vertical align with it, but other elements do single aligns, then said element would align only its left or its top side. But that's only if CSS has a "tolerant" coding philosophy, which I know browsers do, but I don't know if the W3C does.
Thursday, March 26, 2009
Wheels that Won't Kill You?
How to make car wheels NOT poppable?
First of all, unless you can find some utterly indestructible elastic material to contain it, don't fill them with any kind of liquid or gas.
So...
- Make them out of some sort of solid rubber? (Would this cause too much energy loss or melting due to internal friction?)
- Just put a bunch of radially oriented springs inside? (Would need multiple sets of springs with different tensile strengths to absorb shocks/vibrations at various frequencies/amplitudes?)
- Just have a hollow metal drum, or perhaps a solid really hard rubber, perhaps with a thin layer of (softer?) rubber outside, with a sensitive suspension that absorbs all the small stuff?
- Or, maybe they can be gas-filled, but compartmentalized, much like with the hugely successful Titanic.
- Fill them with some sort of rubber or otherwise elastic aerogel substance?
- Use normal tire rubber, but organized in rather large 3-D cells? "Rather large" might be 1-8 cubic inches: just enough to mitigate damage while the tire can remain flexible, not melt due to internal friction, and not be too heavy.
Some of these options could call for an inner core that rarely needs replacing, but an outer rubber covering that could be replaced as needed (for treading and other wear-down) by the auto mechanic.
Labels: automobile safety, automobiles, cars, idea, safety, tires
Just a Retarded Theory
Electrical transformers (such as wall adapters) are pretty clunky and inefficient and emit harmful EMF radiation. What if there's a better way?
You obviously can't just stick a component into the middle of a circuit that increases the voltage or the amperage. If it increases the voltage at the expense of amperage, where do all those extra electrons (or holes) go? If it increases the amperage, where do all the extra electrons (or holes) come from? So obviously you need a completely separate closed circuit, somehow powered by the original, which is exactly what a transformer is, but in an inefficient way. Is there a more direct way for Circuit A to pump Circuit B? Perhaps there is. Perhaps there are two.
With batteries, voltage is multiplied by hooking them up in series, amperage by hooking them up in parallel. I would imagine this is true for charging, too, in an opposite sense. So:
a) charge the battery in series (this does not need a transformer; charge-pump capacitors, rectifiers, resistors, etc. can be used), while simultaneously draining the battery in parallel, or vice versa.
b) Don't optimize the chemicals for storage capacity; optimize them to efficiently immediately transfer ions or whatever on a constant basis.
But wait! If we could just capture electrical energy for a second, say in the form of a static charge, in parallel via Circuit A, and then release it in series via Circuit B, or vice versa, then couldn't we have Circuit A pumping Circuit B with a V<->A conversion ratio? And wouldn't capacitors be exactly what we need to do this? Just use some more capacitors to alternate between charge and discharge, smooth out the output voltage, etc.
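For what it's worth, the parallel-charge/series-discharge trick is how an idealized switched-capacitor converter works. A toy calculation (ideal components, no losses; the numbers are purely illustrative):

```javascript
// Toy model: n identical capacitors charged in parallel from vin, then
// discharged stacked in series. Voltage multiplies by n; since the energy
// moved per cycle is fixed, the deliverable current divides by n.
function switchedCapConverter(vin, n) {
  return { vout: vin * n, currentRatio: 1 / n };
}

const step = switchedCapConverter(5, 2);
console.log(step.vout, step.currentRatio); // 10 0.5
```

So two 5 V caps stacked give 10 V at half the current, i.e. exactly the V<->A trade a transformer makes.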
I don't know.. I never really understood electronics.
Washer/Drier Combos
*Why* don't people sell washers and driers as one unit? Not one unit for washing, one unit for drying, stacked on top of each other, but one unit the size of one that does both?
That would *so* save resources, to say nothing of money and space.
It's not that difficult. You just need a mechanism for pumping hot air into that drum that is impervious to the water when it's washing. Here are some ways to do that:
(This is the style of washing machine that opens from the side.)
Have two layers of drum. Washing machines must have this anyway, so that all the water can escape out the holes and be captured by the larger drum during the spin cycle.
Then, for getting the hot air in, either:
- Have a tube that delivers hot air come up and back down right over the top of the inner drum. Hot air can push in through the holes in the drum; this can be optimized by making very long (but thin) holes at that distance around the whole cylinder. The tube can have a valve on the end of it, which for simplicity's sake automatically opens whenever hot air coming out pushes it open; when it's closed, it's water-tight, so no water will get into the tube.
- Or, have this tube come into a back part that doesn't spin (like with a dryer), but right near the top. Equal anti-water advantage, and no inner drum to block it. As the back part would be part of the outer drum, the inner drum is free to turn without having a back to it; just put the inner drum pretty close to the back so clothes don't fall through.
For the outgoing air, just have a large intake portal in the top of the larger drum. Water shouldn't get that high, but if it does, the whole passage from there to the outside can be water-compatible, so it would just end up coming out the outside vent. We don't really want water constantly splashing into it and drizzling out, though, so have that passageway go a few inches up over the top of the drum before it goes back down. Or, just put something under it like what goes over chimneys to keep the water out.
Or, just have an electrically opened/closed flap for the outtake passage that's relatively water-tight. This could also apply to the hot air opening, of course.
By the way, the hot air pipe should also go up a little after the drum before coming back down, so that if the valve fails, water doesn't splash into it. Don't worry about water overflowing into it, because there can be protections for that:
1) electronic sensing to stop the water from filling if it overflows
2) a drain near the top of the drum, just like in a bathroom sink, leading into the pipe that drains used wash water.
You don't really need a flap for the air outtake vent, because it can be put above the level of the overflow drain and then also rise further up before going back down. The same pretty much applies to the hot air input passageway.
You don't want the outtake vent getting back the hot air that was just put in, though, since it's next to the hot air input. So you could:
1) have the hot air input deflect the hot air to the right or the left like a jet (the same direction the clothes spin in), and have the air outtake vent receive air from the opposite direction
2) have the hot air input in the back of the drum and the hot air outtake vent near the front (by the door); 1) could also be tried in conjunction with this.
That may have sounded complex, but it's all really very simple.
Now you can program the washing/drying cycle as one sequence, controllable from one single interface. The benefits:
-Clothes don't get left in the washer undried.
-Users don't have to take the clothes out of one drum and put them into another.
-Obviously, it saves double the space, money, resources, manufacturing pollution, transportation pollution, time and effort.
*Why* don't people sell washers and driers as one unit? Not one unit for washing, one unit for drying, stacked on top of each other, but one unit the size of one that does both?
That would *so* save resources, to say nothing of money and space.
It's not that difficult. You just need a mechanism for pumping hot air into that drum that is impervious to the water when it's washing. Here are some ways to do that:
(this is the style of washing machine that opens from the side.)
have two layers of drum. washing machines must have this anyway so that all the water can escape out the holes and be captured by the larger drum during the spin cycle.
Have a tube that delivers hot air come up and back down right over the top of the inner drum. hot air can push in through the holes in the drum. this can be optimized by making very loong (but thin) holes at that distance around the whole cylinder. the tube can have a valve on the end of it, which for simplicity's sake automatically opens whenever hot air coming out pushes it open; when it's closed it's water-tight. that way no water will get into the tube.
or
have this tube come into a back part that doesn't spin (like with a dryer), but right near the top. equal anti-water advantage, no inner drum to block it. as the back part would be part of the outer drum, the inner drum is free to turn without having a back to it. just put the inner drum pretty close to the back so clothes don't fall through.
for the outgoing air, just have a large intake portal in the top of the larger drum. water shouldn't get that high, but if it does, the whole passage from there to the outside can be water-compatible, so it would just end up coming out the outside vent. we don't really want water constantly splashing into it and drizzling out, though, so have that passageway go a few inches up over the top of the drum, before it goes back down. or, just put something under it like what goes over chimneys to keep the water out.
or
just have an electrically opened/closed flap for the outtake passage that's relatively water-tight.
this could also apply to the hot air opening, of course.
btw, the hot air pipe should also go up a little after the drum before coming back down, in case the valve fails so water doesn't splash into it.
don't worry about water overflowing into it because there can be protections for that
1) electronic sensing to stop water from filling if it overflows
2) a drain near the top of the drum, just like in a bathroom sink. the drain leads into the pipe that drains used wash water.
You don't really need a flap for the air outtake vent because it can be put above the level of the overflow drain and then also rise further up before it goes back down. the same pretty much applies to the hot air input passageway.
although you don't really want your outtake vent getting the hot air back that was just put in there because it's next to the hot air input. so you could
1) have the hot air input deflect the hot air to the right or the left like a jet--the same direction the clothes spin in, and have the air outtake vent receive air from the opposite direction
2) have the hot air input in the back of the drum and the hot air outtake vent near the front (by the door). also should try 1) in conjunction with this.
that sounded complex, it's all really very simple.
now you can program the washing/drying cycle as one sequence, controllable from one single interface.
-clothes don't get left in the washer undried
-users don't have to take the clothes out of one drum and put them into another
-obviously, it saves the duplicate space, money, resources, manufacturing pollution, transportation pollution, time and effort.
Car Impending Collision Detection
Cars should have RF transponders in them that communicate with all other cars near them instantaneously. GPS wouldn't be sufficient to gather each other's immediate positions and velocities, but radio triangulation might. This data can be used to avoid potential collisions: if two cars detect that they're on a collision path, they can together make an avoidance plan and execute it (with automatic steering and/or braking). This must be last-second (calculated based on speed, traction, weight of car, direction, etc.) because sometimes cars can appear to be on a collision course but it's just due to the shape of a road, traffic lights, etc.
I don't think attenuation-based or pulse-based radio triangulation would work, but maybe phase-based triangulation could?
How to triangulate several cars at once, though?
--they can coordinate to time their output signals to not happen simultaneously
--there can be a whole matrix of tiny antennas and an algorithm sorts it all out.
--each car can operate on a different frequency. the circuitry then either separates the frequencies and takes the phase info (or does it have to use interferometry to get the phase shifts?), or there can be a grid of independent sets of triangulation antennas each working on a different frequency.
or instead of radio, could use ultrasonic location signals? but windspeed must be known?
this could be hell on animals' ears, but perhaps that will keep them away from the roads. not good for pets living very near said roads, though..
Also, a car could have a camera or two (fast FPS) and a computer to deduce, in real time, the positions and velocities of nearby objects that stand out, then avoid anything that's too big or matches a certain profile (last-second, of course).
this has the following advantages:
-planning collision avoidance can take the environment besides other cars into account
-it can avoid collisions with things other than just cars.
-it can be used instead of a radio/ultrasonic (relative) positioning system if those are impractical
-it doesn't require other cars to support the same technology.
although it would still help if these cars, independently sensing each others' positions via cameras, could communicate via radio to coordinate their avoidance plans.
perhaps the avoidance algorithm should have some heuristics for dealing with failed tires.
maybe also alternative avoidance branches for people wearing their seatbelt vs. people not wearing their seatbelt. for a person wearing their seatbelt, a head-on collision is better than a collision from their side? and better than a roll? which may not be true of persons without seatbelts on? actually a head-on or rear collision is probably always better now because cars are required to have airbags.
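The last-second collision-course test described above could be sketched roughly like this. A minimal Python sketch: it assumes each car already has the other's 2D position and velocity (from the triangulation or cameras), and the safety margin is an arbitrary placeholder; a real system would fold in speed, traction, weight, and car dimensions as the text says.

```python
# Hypothetical sketch of the collision-course check: compute the closest
# point of approach (CPA) between two cars moving at constant velocity,
# and flag a collision if the minimum separation falls below a margin.

def time_of_closest_approach(p1, v1, p2, v2):
    # relative position and velocity of car 2 with respect to car 1
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]
    speed_sq = vx * vx + vy * vy
    if speed_sq == 0:          # same velocity: separation never changes
        return 0.0
    # time at which |r + v*t| is minimized, clamped to the future
    return max(0.0, -(rx * vx + ry * vy) / speed_sq)

def on_collision_course(p1, v1, p2, v2, margin=2.0):
    t = time_of_closest_approach(p1, v1, p2, v2)
    dx = (p2[0] + v2[0] * t) - (p1[0] + v1[0] * t)
    dy = (p2[1] + v2[1] * t) - (p1[1] + v1[1] * t)
    return (dx * dx + dy * dy) ** 0.5 < margin

# two cars approaching an intersection at right angles; both reach (50, 0) at t=5
print(on_collision_course((0, 0), (10, 0), (50, -50), (0, 10)))  # True
```

Note the clamp to the future: two cars that have already passed each other can look like they were "on a collision course" in the past, which is exactly the kind of false positive the last-second rule is meant to avoid.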
Econo-automobile
for people who can barely afford some transportation (and can boldly 'live simply') and for saving resources.
no auto door locks
no power windows
windows don't crank, just have something to slide them up and down with and lock in place.
some windows are plastic or acrylic?
no a/c or heating
no radio
dials: odometer, speedometer, fuel
buy battery separately, you might have one lying around.
manual trans.
no power steering
rudimentary shock absorbers - just fancy enough that it won't kill you.
make cylinders fire on both sides, so each one is a double cylinder?
rotary engine?
but should probably use a completely different engine with better efficiency anyway, like those new ones in PopSci.
no back doors
no inside light - bring a flashlight if you need it.
no glove compartment - bring a box if you need it.
instruments are lit just by shining an LED on them
crank to start car?
have model without back seats
seats are cheap.
just a hard surface with some type of gel/water/air-filled sack (compartmentalized into cells) over it? (this should probably help with the shock-absorber issue better than a normal, expensive seat would, too.)
can't slide forward/backward, but can adjust tilt of back. just lock and unlock with notches? (no springs, turning things, or continuously variable positions)
no microchip, those things are a pita anyway.
no paint job, or just something to make the surface legal if it's metal. in which case, the paint has to stay on but it doesn't have to be consistent or smooth or be a nice color.
no trunk?
really simple door handles? just push something left or right to latch / unlatch, which also serves as a handle to pull it open? the lock and key mechanism would just block that sliding thing. or is it cheap enough already to do it the normal way?
can we do away with windshield wipers in favor of an anti-water windshield coating?
contracts with junkyards to use used parts?
could make several different types of fittings for some things to accommodate different models
maybe could even use used engines, transmissions, bodies
small form
have no back of the car, except for support for the rear wheels, lights and license plate?
don't skimp on safety
airbags
abs
Labels: automobiles, cars, economy, environmentalism, green, idea
Wednesday, March 25, 2009
Automatic Translation
All language translators out there currently on the web suck.
One idea for a very effective translator would be to feed a self-teaching AI program tons and tons of documents that already have existing translations, and have it automatically generate rules for proper translation. This would *automatically* accommodate correct grammar, loose grammar, idioms, jargon, etc. It would require *a lot* of computing power, but it only has to be done *once*. Training documents can be found in existing corpora, or translated by hand specifically for the project. Two possible ways to generate rules would be: genetic algorithms, or some sort of exhaustion of many possible rule formulations (this could be bootstrapped with various types of data, for example, a word-sense->part-of-speech key and a word-sense->popularity-of-use key). Incidentally, a week after I had this idea I heard a couple of people were working on just such a project, but I've yet to see the fruits of their work anywhere..
Rather than determining rules for translating to and from each possible combination of two languages, it's probably best to come up with *one* language that all languages can be translated to/from with no loss. Just making a collation of linguistic categories for words and clauses in each known language, and using these in an X-bar kinda structure, should be enough. Then any given language would be translated to this intermediate language, and then from that to the target language.
This greatly reduces the costs of furnishing texts for the learning algorithm and of running it. This intermediate language's lexicon should be the superset of all the senses of all the words of the input languages, but with words identical in grammatical function and alike in meaning grouped into synsets, where each word in a synset is linked to each other word in the synset with a particular weight, which is their level of similarity (this may have to be done by hand). A word in a source text would, via a table, point to a word in some synset (if the word has any synonyms), and then the closest word to that (weight-wise), or the word itself, that some word in the target language points to, would be used.
A problem arises when a language possessing a certain tense/aspect/modality is translated to a language not possessing that. Possible solutions are: compromise and translate to a similar tense/aspect/modality that gets the point across, or totally rearrange the semantics of the sentence in the resultant text. This should not be too difficult given that the algorithm fully groks the grammatical structure of the sentence in the first place. Similarly some words won't exist in all languages. They can be handled by: using a similar-enough word that won't cause too much misunderstanding, or substituting the given word with a phrase (or, in some languages, possibly using agglutination).
Obviously I'm not implying that the semantic rearranging or phrase substitution would be wholly "figured out" by the translator; it would rely on a pre-programmed (or self-learned, via particular patterns found in the training texts) ruleset for such rearrangements.
"Similar-enough" words could be implemented using a weight mechanism just like the one used within a synset, but applying cross-synset/non-synonym. (In fact, we might as well just do away with a categorical consideration of synonym sets altogether.. unless lack of a bona fide synonym is used as a cue to look for a phrasal substitute?) Just enough vague linkages have to be drawn to accomodate all combinations of source/target languages. In fact for the sake of laziness, perhaps unlimited-length chains of weight-linkages could be used, when necessary. I suppose this requires a function for generating an overall priority value based on X number of Y-valued weights. For example, would we favor an a--(1)-->b--(2)-->c link, or an a--(3)-->d link? (1 means highest priority, because there is no bottom limit.) In this case, it would do to specify weights in arbitrary decimals rather than simply orders of priority.
We could effectively have myriad already-made translation texts available for training in this one-language approach, by creating a pretty darn good English<->the-one-language translator, and then using texts translated to and from English (it probably being the most widely translated language), with the English part being converted to/from the one language for the purposes of training. It remains in question how much trouble we should go through, if any, to make the program aware of whether a given training pair was actively translated from X to Y or from Y to X. This goes for the non-one-language approach also.
Machine learning may not be necessary: humans could perhaps construct perfectly comprehensive BNF notations (including idioms) and use a GLR parser, but I don't know how well this will work for (not-so-atypical) taking of grammatical liberties. If this approach is taken, the machine should obviously be programmed to understand affixes so that base words and inflections can be deduced for inflected words that aren't specifically in any dictionary. Another possible adaptation could be Damerau–Levenshtein distance or similar, to account for typos, misspellings, spelling variants, and OCR miscalculations. A list of commonly misused words might also be helpful, though maybe not.
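For the typo-tolerance part, here is a minimal sketch of the optimal-string-alignment variant of Damerau–Levenshtein distance: insertions, deletions, substitutions, and adjacent transpositions each count as one edit, so a dictionary lookup can rank near-misses.

```python
# Optimal-string-alignment Damerau-Levenshtein distance via dynamic programming.
def dl_distance(a, b):
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i                     # delete all of a's prefix
    for j in range(len(b) + 1):
        d[0][j] = j                     # insert all of b's prefix
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i-1] == b[j-1] else 1
            d[i][j] = min(d[i-1][j] + 1,       # deletion
                          d[i][j-1] + 1,       # insertion
                          d[i-1][j-1] + cost)  # substitution / match
            if i > 1 and j > 1 and a[i-1] == b[j-2] and a[i-2] == b[j-1]:
                d[i][j] = min(d[i][j], d[i-2][j-2] + 1)  # transposition
    return d[len(a)][len(b)]

print(dl_distance("recieve", "receive"))  # 1 -- a single 'ie' transposition
```

Without the transposition rule, plain Levenshtein would charge the common "recieve" misspelling two edits instead of one, which is why the Damerau variant suits typo correction better.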
One trick to this translating could be to resolve ambiguous meanings, or connotations, of words in a sentence based on surrounding sentences. Meaning that if the word is used in such a way in a surrounding sentence that it definitely, or probably, means this or that, then we can induce that it probably means this in the given sentence, too. It could even be determined (by the given sentence or by a surrounding sentence) based on some pattern recognitions afforded by the training process. (These may even include subtle and holistic inferences.)
Meaning and grammar resolution can go both ways: grammar can help determine the sense of a word, and a known word sense could help determine the grammar of a sentence.
Connotation inferences (whether being done as-such, or effectively for consideration purposes but not internally tokenized on that level, per se) can even help determine the most germane translation synonym.
We *may* want to even layer our conferencing of meaning-resolution amongst sentences according to paragraph, chapter, document, author/source, and/or genre, but that's probably overkill, beyond just having a sentence-level tier and a document-level tier. Actually genre and source seem to be good too, since they're categories where you'd find words used in particular ways. Oh, I guess a sub-sentence-level tier could be relevant too (because the word could be used twice in the same sentence), but this layer would be treated a little differently of course, since self-contained syntax trees (mostly) start and end at the sentence level.
People can arbitrarily create new words on-the-fly in an agglutinating language. This would be hard for a translator to automatically substitute with defining phrases..but it would be easy to simply use a form of pseudo-agglutination in the given target language; for example, if poltergeist weren't already a well-known and established word, it would be translated into English as "rumble-ghost" or "rumble-spirit." Perhaps a little awkward, but I think it's pretty effective for getting a point across.
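The pseudo-agglutination fallback could be as simple as this toy sketch. The morpheme table is invented for illustration; a real system would draw the glosses from the intermediate-language lexicon.

```python
# Toy pseudo-agglutination: split an unknown compound into known morphemes
# (greedy longest-match) and glue their English glosses with hyphens.
glosses = {"polter": "rumble", "geist": "ghost", "zeit": "time"}

def gloss_compound(word):
    result, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest morpheme first
            if word[i:j] in glosses:
                result.append(glosses[word[i:j]])
                i = j
                break
        else:
            return None                     # unknown morpheme: give up
    return "-".join(result)

print(gloss_compound("poltergeist"))  # rumble-ghost
```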
Monday, March 23, 2009
Better Home Networking
Any time an application tries to listen on a port, and the firewall allows it and has the option set to do this, the OS should send a special control signal (such as a UPnP port-forwarding request) to the router for it to forward that port to the listening PC. When the port is no longer being listened on, the OS can tell the router to un-forward that port. If that port is already being forwarded to another PC on the LAN, then the socket listen command could return an error.
This way multiple PCs on a LAN can occasionally use the same port numbers without having to explicitly set up/change port forwarding in the router setup, or one PC can always use that port without the user having to manually set anything up.
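The OS-side flow might look like this sketch. The router call here is a stub standing in for a real UPnP IGD AddPortMapping request, and the hostnames and port-registry logic are invented for illustration; the point is just the listen-then-forward-or-error sequence.

```python
# Sketch of the OS hook: on listen(), ask the router to forward the port;
# if another LAN host already holds the forward, fail the listen.
router_table = {}  # port -> LAN host currently holding the forward

def upnp_add_mapping(port, host):
    """Stand-in for a UPnP AddPortMapping request to the router."""
    if router_table.get(port, host) != host:
        return False            # port already forwarded to another PC
    router_table[port] = host
    return True

def listen_with_autoforward(port, host):
    """What the OS would do when an app listens and the firewall allows it."""
    if not upnp_add_mapping(port, host):
        raise OSError(f"port {port} already forwarded to {router_table[port]}")
    return f"{host} listening on {port}, router forwards {port} -> {host}"

print(listen_with_autoforward(8080, "192.168.1.10"))
```

The matching teardown would send DeletePortMapping when the socket closes, so the next PC on the LAN can claim the port.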
Labels: home networking, LAN, networking, port forwarding, ports, UPnP
I don't think I posted this one yet..
Universal Programming Syntax
Any programming language syntax can basically be decomposed into nested lists. Something like XML. The lists would include parameters, keywords, function calls, whatever. It's essentially taking every structure and ordering its terms in the same way and converting every operator into hierarchical structure or keywords. The idea is that if we could make a generalized language grammar, somewhat like XML but easier to type/read and perhaps richer with structures, we could express any programming language in this form. That way learning a new language would be much easier, because you don't have to learn a new syntax or grammar--merely its constructs and functions--and also you wouldn't have to put up with really ugly syntax.
It isn't necessarily that every new language would have to use this specification, but that people could write front-ends that can convert from this specification to given languages and back, preferably as IDE plugins.
How exactly this language should be designed is hypothetical--I could take a shot at it, but that doesn't mean that my suggestion for a universal language is inextricably linked to my particular idea of an implementation of it.
One thing that comes to mind is that, although every nested structure in the program could be nested in the universal language in the same way, that could make it much less readable.
take, for example: for(int x=0;x<=255 && !y;x++) {do_this(exp((x+1),2)+3); }
you could write it as
for:
    initial:
        declare int x 0
    compare:
        and:
            le x 255
            not: y
    action:
        inc x
    do:
        function do_this:
            sum:
                exponent:
                    sum: x 1
                    2
                3
and the above works okay for control structures, but is horrible for and's and or's and math--basically any operators.
and on the other hand, you could do it like this:
for(int(x,0), and(le(x,255),not(y)), inc x, function(do_this, sum(exp(sum(x,1), 2), 3)))
which is a little better for operators, but isn't so good for control structures.
and, of course, you could simply allow arbitrary line breaks and do it like this
for(
    int x 0,
    and(le(x,255),not y),
    inc x,
    function(do_this, sum(exp(sum(x,1), 2), 3))
)
but that still could be made a little bit more elegant, by allowing two forms of nesting:
for:
    int x 0
    and(le(x,255),not y)
    inc x
    function(do_this, sum(exp(sum(x,1), 2), 3))
(here indentation is being used as a grouping mechanism.)
and even further, we could be more kind to operators, and technically we wouldn't even be changing the definition of the universal language:
for:
    int x 0
    ((x le 255) and not y)
    inc x
    function(do_this, ((x plus 1) exp 2) plus 3)
although it might do to make some standards about how things in lists are ordered, so for example, you can't have the function/operator name be the 4th element in the list unless there are only three elements in which case it's the first element but only on tuesdays and depending on the price of beans as declared earlier in the source.
one thing we should not allow, though, is inexplicit priority of operators. all nesting should be explicit, that way you don't have to worry about learning the order of precedence for the particular language or thinking about it when you interpret some source code. exceptions maybe should be made, though, for basic numerical operators. i.e., everyone learns in elementary school or junior high that it goes: explicit grouping, then ^, then * and /, then + and -. although it's still on the table whether or not symbolic operators should be allowed in the specification. in some cases it makes it more readable, in other cases words would make their meaning more obvious. one solution would be to allow only >, <, <=, >=, *, /, +, -, . (namespaces), and either <> or !=. ^ shouldn't be allowed since it means exponent in some languages and XOR in others. and % can mean percent, modulus, string interpolation, etc. i'm being strict about it to make it easier for those who haven't done any learning of the language, although it could, perhaps, be made a language intended for people who do a little bit of studying. but that could make it a little more concise but a little less 'accessible'..
while it's up to whomever to specify how a particular language is translated into the universal language, there should probably be some guidelines set to foster consistency at little cost. for example, for loops exist in most every language, and we could dictate that for loops should start with the name 'for' as the first item. which they would probably do anyway, but perhaps there are other cases that are less normative. and more than just the 'for' would be specified.
common elements of a for loop include:
different languages would use different items of that list. each item could be given an official name, and a language uses whichever items are appropriate. it would be somewhat like the first example of code in this text, rather than the later examples where i just allowed positions in the list to determine meanings.
obviously mechanisms for literal strings and also comments need to be included. i'm a fan of Python's flexibility when it comes to literals. for comments i like C, I think they visually stand out well as being extraneous to the code. even moreso if it's all //'s but then you need an editor that can block comment and uncomment for convenience.
you may have noticed that i pulled some tricks with being able to use spaces to separate list items in some cases and commas in others. basically i tried to allow as much flexibility for the programmer in that as possible while maintaining that it can be interpreted determinately. so the three levels of separators/grouping would be spaces, commas and newlines, but they can be shifted up or down at whim. and parentheses can help too
i suppose other things that really demand symbols are dereferencers and subscripts. moreso dereferencers, because a[10] can be handled as (a 10), a 10 or a(10), or even a sub 10, but dereferencers might get tedious with having to type ptr ptr a, ptr ptr (ptr b), etc. however, instead of doing that we can do this: p2 a, p2(p b), etc. or _p _p (_p b) isn't too bad anyway. Should we have a mechanism for distinguishing language keywords from arbitrary names? this mechanism should probably be some non-enforced kind of Hungarian notation defined by the language translator. for example, keywords could always be all caps.
another remaining issue is string literals. in what universal way should they be implemented? I would go for Python's syntax, with the possible exception that the 'u' modifier might become superfluous, as we could make everything always unicode, then translate to ascii or other encodings when necessary in the language translation. also we could add PHP's nowdoc syntax.
one other issue: the plain list vs. named sections formats, for example the way i did the 'for' command the first time vs. the subsequent times. should the language itself determine which one is used, or should the user be able to use both styles for any given language? the parser could specify the components needed in a way similar to Python's way of defining function parameters, such that arguments may be passed by name or just listed, and if the particular grammar allows, then names can even be passed that weren't pre-defined.
for those familiar with compiler technologies, yes, this is basically just a flexible, human-friendly way of specifying abstract syntax trees.
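To make the abstract-syntax-tree point concrete, here is a Python sketch that represents the universal form as nested lists (operator or keyword first) and emits C from it. The node names and the tiny emitter are invented for illustration; a real front-end would go in both directions and cover a full grammar.

```python
# The running 'for' example as a nested-list tree, and a toy C emitter for it.
tree = ["for",
        ["decl", "int", "x", "0"],
        ["and", ["le", "x", "255"], ["not", "y"]],
        ["inc", "x"],
        ["call", "do_this", ["add", ["exp", ["add", "x", "1"], "2"], "3"]]]

def emit_c(node):
    if isinstance(node, str):            # leaf: identifier or literal
        return node
    op, *args = node
    if op == "for":
        init, cond, step, body = (emit_c(a) for a in args)
        return f"for ({init}; {cond}; {step}) {{ {body}; }}"
    if op == "decl":
        return f"{args[0]} {args[1]} = {emit_c(args[2])}"
    if op == "call":
        return f"{args[0]}({', '.join(emit_c(a) for a in args[1:])})"
    binary = {"and": "&&", "le": "<=", "add": "+"}
    if op in binary:                     # fully parenthesized: no precedence rules
        return "(" + f" {binary[op]} ".join(emit_c(a) for a in args) + ")"
    if op == "not":
        return f"!{emit_c(args[0])}"
    if op == "inc":
        return f"{args[0]}++"
    if op == "exp":
        return f"pow({emit_c(args[0])}, {emit_c(args[1])})"
    raise ValueError(op)

print(emit_c(tree))
# for (int x = 0; ((x <= 255) && !y); x++) { do_this((pow((x + 1), 2) + 3)); }
```

Note how the emitter never consults an operator-precedence table: because all nesting in the universal form is explicit, every compound expression comes out fully parenthesized, which is exactly the property argued for above.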
Universal Programming Syntax
Any programming language syntax can basically be decomposed into nested lists. Something like XML. The lists would include parameters, keywords, function calls, whatever. It's essentially taking every structure and ordering its terms in the same way and conveying every operator to hierarchical structure or keywords. The idea is that if we could make a generalized language grammar, somewhat like XML but easier to type/read and perhaps more rich with structures, we could express any programming language is this form. That way learning a new language would be much easier, because you don't have to learn a new syntax or grammar--merely its constructs and functions--and also you wouldn't have to put up with really ugly syntax.
It isn't necessarily that every new language would have to use this specification, but that people could write front-ends that can convert from this specification to given languages and back, preferably as IDE plugins.
How exactly this language should be designed is hypothetical--I could take a shot at it, but that doesn't mean that my suggestion for a universal language is inextricably linked to my particular idea of an implementation of it.
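To make the nested-list idea concrete, here's one possible sketch (in Python, purely as a host for illustration; node names like 'sum' and 'exponent' are placeholders, not a fixed vocabulary):

```python
# A hypothetical universal-syntax tree for: do_this(exp(x + 1, 2) + 3)
# Each node is a plain list: [operator-or-keyword, *operands].
tree = [
    "function", "do_this",
    ["sum",
        ["exponent",
            ["sum", "x", 1],
            2],
        3],
]

def render(node):
    """Render a nested-list node back into a Lisp-ish text form."""
    if isinstance(node, list):
        return "(" + " ".join(render(n) for n in node) + ")"
    return str(node)

print(render(tree))
```

The point is that the tree structure is the whole grammar; everything language-specific lives in the node names.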
One thing that comes to mind is that, although every nested structure in the program could be nested in the universal language in the same way, that could make it much less readable.
take, for example: for(int x=0;x<=255 && !y;x++) {do_this(exp((x+1),2)+3); }
You could write it as:
for:
    initial:
        declare int x 0
    compare:
        and:
            le x 255
            not: y
    action:
        inc x
    do:
        function do_this:
            sum:
                exponent:
                    sum: x 1
                    2
                3
The above works okay for control structures, but it's horrible for and's, or's, and math -- basically any operators.
On the other hand, you could do it like this:
for(int(x,0), and(le(x,255),not(y)), inc x, function(do_this, sum(exp(sum(x,1), 2), 3)))
which is a little better for operators, but isn't so good for control structures.
And, of course, you could simply allow arbitrary line breaks and do it like this:
for(
    int x 0,
    and(le(x,255),not y),
    inc x,
    function(do_this, sum(exp(sum(x,1), 2), 3))
)
but that still could be made a little bit more elegant, by allowing two forms of nesting:
for:
    int x 0
    and(le(x,255),not y)
    inc x
    function(do_this, sum(exp(sum(x,1), 2), 3))
(Here indentation is being used as a grouping mechanism.)
And even further, we could be kinder to operators, and technically we wouldn't even be changing the definition of the universal language:
for:
    int x 0
    ((x le 255) and not y)
    inc x
    function(do_this, ((x plus 1) exp 2) plus 3)
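A parser for the indentation-grouped form is easy to prototype; here's a minimal Python sketch (assuming consistent indentation, and ignoring everything else about the grammar):

```python
def parse_indented(text):
    """Parse indentation-grouped lines into nested lists.

    Each line becomes [line_content, *children]; a line indented deeper
    than the previous one becomes a child of it.
    """
    root = ["root"]
    stack = [(-1, root)]  # (indent level, node) pairs
    for line in text.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip())
        node = [line.strip()]
        # Pop back out to the nearest shallower-indented ancestor.
        while stack and stack[-1][0] >= indent:
            stack.pop()
        stack[-1][1].append(node)
        stack.append((indent, node))
    return root

src = """\
for:
  int x 0
  and(le(x,255),not y)
  inc x
"""
tree = parse_indented(src)
```

Indentation carries the same information as the parentheses did, so the two surface forms can share one tree representation.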
Although it might do to make some standards about how things in lists are ordered -- so that, for example, you can't have the function/operator name be the 4th element in the list unless there are only three elements, in which case it's the first element, but only on Tuesdays and depending on the price of beans as declared earlier in the source.
One thing we should not allow, though, is implicit operator precedence. All nesting should be explicit, so that you don't have to learn the order of precedence for a particular language, or think about it when you read some source code. Exceptions could perhaps be made for basic numerical operators; everyone learns in elementary school or junior high that it goes: explicit grouping, then ^, then * and /, then + and -. It's still on the table whether symbolic operators should be allowed in the specification at all. In some cases they make code more readable; in other cases words would make the meaning more obvious. One solution would be to allow only >, <, <=, >=, *, /, +, -, . (namespaces), and either <> or !=. ^ shouldn't be allowed, since it means exponent in some languages and XOR in others, and % can mean percent, modulus, string interpolation, etc. I'm being strict about this to make it easier on those who haven't studied the language, although it could perhaps instead be made a language intended for people who do a little bit of studying -- that would make it a little more concise but a little less 'accessible'.
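One payoff of fully explicit nesting is that evaluation needs no precedence table at all; a toy evaluator over the nested-list form (in Python, with made-up operator names) just recurses:

```python
# Evaluate a fully explicit nested-list expression.  No precedence rules
# are needed, because the grouping *is* the tree structure.
OPS = {
    "plus": lambda a, b: a + b,
    "exp":  lambda a, b: a ** b,
    "le":   lambda a, b: a <= b,
    "and":  lambda a, b: a and b,
    "not":  lambda a: not a,
}

def evaluate(node, env):
    if isinstance(node, list):
        op, *args = node
        return OPS[op](*(evaluate(a, env) for a in args))
    # Bare strings are variable names; anything else is a literal.
    return env.get(node, node) if isinstance(node, str) else node

# ((x plus 1) exp 2) plus 3, with x = 4  ->  (5 ** 2) + 3
expr = ["plus", ["exp", ["plus", "x", 1], 2], 3]
print(evaluate(expr, {"x": 4}))
```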
While it's up to whoever specifies how a particular language is translated into the universal language, there should probably be some guidelines set to foster consistency at little cost. For example, for loops exist in almost every language, and we could dictate that a for loop starts with the name 'for' as its first item -- which people would probably do anyway, but perhaps there are other cases that are less normative. And more than just the 'for' would be specified.
common elements of a for loop include:
initialization
comparison
incrementation or whatever
variable name(s)
list you're selecting from
what to do
Different languages would use different items of that list. Each item could be given an official name, and a language uses whichever items are appropriate. It would be somewhat like the first example of code in this post, rather than the later examples, where I just allowed positions in the list to determine meanings.
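This named-components idea maps directly onto Python's keyword arguments; as a hypothetical sketch, a 'for' node builder could accept its components positionally or by name (the component names here are my own):

```python
def for_node(initial=None, compare=None, action=None, do=None):
    """Build a 'for' node from named (or positional) components.

    A given language's translator fills in only the components it uses;
    omitted components simply don't appear in the tree.
    """
    parts = [("initial", initial), ("compare", compare),
             ("action", action), ("do", do)]
    return ["for"] + [[name, value] for name, value in parts if value is not None]

# Positional style, like the later examples in the post:
a = for_node(["declare", "int", "x", 0], ["le", "x", 255], ["inc", "x"])
# Named-section style, like the first example:
b = for_node(compare=["le", "x", 255],
             initial=["declare", "int", "x", 0],
             action=["inc", "x"])
assert a == b  # same tree regardless of calling style
```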
Obviously, mechanisms for string literals and comments need to be included. I'm a fan of Python's flexibility when it comes to literals. For comments I like C's style; I think /* */ blocks visually stand out well as being extraneous to the code. Even more so if it's all //'s, but then you need an editor that can block-comment and uncomment for convenience.
You may have noticed that I pulled some tricks with being able to use spaces to separate list items in some cases and commas in others. Basically, I tried to allow the programmer as much flexibility as possible while maintaining that the code can be interpreted deterministically. So the three levels of separators/grouping would be spaces, commas and newlines, but they can be shifted up or down at whim, and parentheses can help too.
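The three separator levels can be sketched as a tiny Python splitter -- newlines group most coarsely, then commas, then spaces (this toy version ignores parentheses and quoting):

```python
def split_levels(text):
    """Split text by the three separator levels: newline > comma > space.

    Returns a list of lines, each a list of comma-groups, each a list
    of space-separated tokens.
    """
    return [
        [group.split() for group in line.split(",")]
        for line in text.strip().splitlines()
        if line.strip()
    ]

parsed = split_levels("int x 0, inc x\nle x 255")
```

Shifting the hierarchy "up or down at whim" would just mean letting the parser pick which level a given list starts at.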
I suppose other things that really demand symbols are dereferencers and subscripts -- more so dereferencers, because
a[10] can be handled as (a 10), a 10 or a(10), or even a sub 10, but dereferencing might get tedious with having to type ptr ptr a, ptr ptr (ptr b), etc. However, instead of doing that we can do this: p2 a, p2(p b), etc.; or _p _p (_p b) isn't too bad anyway. Should we have a mechanism for distinguishing language keywords from arbitrary names? This mechanism should probably be some non-enforced kind of Hungarian notation defined by the language translator; for example, keywords could always be all caps.
Another remaining issue is string literals. In what universal way should they be implemented? I would go for Python's syntax, with the possible exception that the 'u' modifier might become superfluous, since we could make everything Unicode and translate to ASCII or other encodings when necessary during language translation. We could also add PHP's nowdoc syntax.
One other issue: the plain-list vs. named-sections formats -- for example, the way I did the 'for' command the first time vs. the subsequent times. Should the language itself determine which one is used, or should the user be able to use both styles for any given language? The parser could specify the components needed in a way similar to how Python defines function parameters, such that arguments may be passed by name or just listed, and, if the particular grammar allows it, names that weren't pre-defined can even be passed.
for those familiar with compiler technologies, yes, this is basically just a flexible, human-friendly way of specifying abstract syntax trees.
Tuesday, March 17, 2009
Many Little Turings FTW
The latest Intel chips (Xeon, Core i7, etc.) have 700-800 million transistors. If one were to make the simplest logic-gate setup possible that can run a universal Turing machine -- or not necessarily the simplest, but making a few simplicity-vs.-efficacy trade-offs -- and then parallel-scale it to around 800 million transistors, we could possibly have thousands, maybe even hundreds of thousands, of general-purpose CPU cores running at once in a single half-inch CPU.
Graphics cards, the latest of which push 1 teraFLOPS, do something similar to this, but their cores (or more accurately, stream processors) are not Turing-complete -- that is, they can't do general computing -- and they probably still take orders of magnitude more transistors per "core" (because they implement many sophisticated kinds of operations directly, such as floating-point instructions and CGI-specific functions), meaning far fewer parallel units than this computer could have.
Multiplying two 16-bit numbers, for example, might take over 256 cycles, and adding two numbers could take at least 16, but with thousands/hundreds of thousands of cores it might all be worth it. (I come up with the values 256 and 16 because I'm thinking this machine would have no intrinsic conception of bytes and would process everything a bit at a time, but perhaps it wouldn't be that way.)
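Those cycle estimates can be sanity-checked in software; here's a Python sketch of shift-and-add multiplication that counts single-bit add steps (what counts as a "cycle" on such a core is, of course, an assumption):

```python
def bit_serial_multiply(a, b, width=16):
    """Multiply two unsigned ints one bit at a time (shift-and-add),
    counting single-bit full-adder steps as a rough proxy for cycles."""
    result, steps = 0, 0
    for i in range(width):                 # one pass per multiplier bit
        if (b >> i) & 1:
            carry, shifted, summed = 0, a << i, 0
            for j in range(2 * width):     # ripple-carry add, bit by bit
                x = (result >> j) & 1
                y = (shifted >> j) & 1
                s = x ^ y ^ carry          # full-adder sum bit
                carry = (x & y) | (carry & (x ^ y))
                summed |= s << j
                steps += 1
            result = summed
    return result, steps

product, steps = bit_serial_multiply(123, 45)
```

With 45 having four set bits, this run takes 4 passes of 32 single-bit steps each -- the same order of magnitude as the 256 mentioned above.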
This idea is for a computer intended for very specialized applications, not general home computing as with a PC; average PC users have no use for thousands of simultaneous general-purpose cores.
It might be neat if we could actually make this self-programmable in a way, perhaps like a Turing machine with a self-modifiable state table. That kind of dynamism would be very useful for a machine with such a limited instruction set: maybe an AND could be rewritten as an XOR, and so on. I'm not sure how that can be done in a way that doesn't incur so many extra transistors that they'd be better put to use by simply extending the instruction set and/or adding more cores. Perhaps there could be intermediate router circuits that re-route the flow of bits through the core's parts -- for example, through an AND gate instead of through an XOR gate. Or perhaps there could be a number of different types of cores with slightly different functionality, and the code determines which type of core a given thread goes to. Maybe a thread could even switch core-types midway through, with its state information transferred to another core. (This could be effectively the same as reprogramming a core, but more limited.) The proportions of cores available for the various core-types would hopefully be determined by the most common use-case scenarios.
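A self-modifiable state table is easy to model in software; here's a toy Python Turing machine whose transition table is a plain dict, so a rule can be rewritten between runs (illustrative only, not a hardware proposal):

```python
def run_turing(table, tape, state="S", pos=0, max_steps=100):
    """Run a tiny Turing machine.  `table` maps (state, symbol) to
    (write, move, next_state); being a plain dict, it can be mutated
    between runs -- a software analogue of a self-modifiable state table."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "HALT":
            break
        sym = tape.get(pos, 0)
        write, move, state = table[(state, sym)]
        tape[pos] = write
        pos += move
    return [tape[i] for i in sorted(tape)]

# A rule set that flips bits until it hits the end marker (2):
table = {
    ("S", 0): (1, 1, "S"),
    ("S", 1): (0, 1, "S"),
    ("S", 2): (2, 0, "HALT"),
}
out = run_turing(table, [1, 0, 1, 2])      # flips each bit
# "Reprogram" one rule: now 0s pass through unchanged instead of flipping.
table[("S", 0)] = (0, 1, "S")
out2 = run_turing(table, [1, 0, 1, 2])
```

In hardware the interesting question is whether that mutability costs fewer transistors than just building both behaviors in.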
The Tile64 had a good idea for inter-core communication: each core has its own router, and a message passing from one core to another steps through each core in between to get there. This may be a reasonable way to do it, although it seems like a rather complex thing for a system going for (per-core) minimalism. Perhaps the system could have very localized clouds of shared memory for X number of cores, and those clouds of memory could themselves communicate with other clouds. The CPU's scheduler -- or perhaps the operating system itself -- would group tasks according to which tasks need to communicate a lot with, or share memory with, which other tasks. If the different-types-of-cores idea is used, then each localized core-cloud should probably contain a variety of core-types. That way threads can more quickly switch core-type, and super-tasks involving different kinds of functions can more easily have their different functions inter-communicate -- though the second argument can go either way; perhaps some features should be locally grouped and some not.
Anyway, this is all on one chip. If we could have 100,000 cores on one chip, imagine how many petaFLOPS a supercomputer made up of these chips would do!
Labels:
cpu,
flops,
parallel computing,
petaflops,
teraflops,
turing machine