
Esos aparatos del demonio

My notes on what I read about computers and peripherals

Wednesday, January 10, 2007

The parallel era


Over at Ars Technica they report that Intel has put its quad-core processors on sale. But the most interesting part of the article is that it links to an interview with Hennessy and Patterson. In case any of the few readers this oh-so-frequently-updated blog must have left doesn't already know, they are famous for writing one of the classics of computer science: «Computer Architecture: A Quantitative Approach». In his spare time, Hennessy created the MIPS processor (which ended up as the heart of the PlayStation, among many other chips) and the company to go with it, and lately he amuses himself being president of Stanford University. That Patterson fellow teaches at another university that is said to be not bad at all, Berkeley, and took part in inventing concepts such as RISC and RAID. As you can see, he likes four-letter acronyms that start with R.

In the interview they go over the past, present and future of computer architecture.

These two paragraphs strike me as a good summary of the past and the present:


JH I think this is nothing less than a giant inflection point, if you look strictly from an architectural viewpoint - not a technology viewpoint. [...] If you look at architecture-driven shifts, then this is probably only the fourth. There's the first-generation electronic computers. Then I would put a sentinel at the IBM 360, which was really the beginning of the notion of an instruction-set architecture that was independent of implementation.

I would put another sentinel marking the beginning of the pipelining and instruction-level parallelism movement. Now we're into the explicit parallelism multiprocessor era, and this will dominate for the foreseeable future.


It is such an interesting moment that they say a fresh graduate student from Berkeley or Stanford could build better processors than Intel!


Back in the '80s [...] I think the graduate students at Berkeley, Stanford, and elsewhere could genuinely build a microprocessor that was faster than what Intel could make, and that was amazing.

Now, I think today this shift toward parallelism is being forced not by somebody with a great idea, but because we don't know how to build hardware the conventional way anymore. This is another brand-new opportunity for graduate students at Berkeley and Stanford and other schools to build a microprocessor that's genuinely better than what Intel can build. And once again, that is amazing.


They note that the industry bet on instruction-level parallelism and hit its limits sooner than expected. Now it is the turn of explicit parallelism, which, as an article I commented on here a couple of years ago put it, means the end of automatic improvements in program speed:


Remember that this era is going to be about exploiting some sort of explicit parallelism, and if there's a problem that has confounded computer science for a long time, it is exactly that. Why did the ILP revolution take off so quickly? Because programmers didn't have to know about it. Well, here's an approach where I suspect any way you encode parallelism, even if you embed the parallelism in a programming language, programmers are going to have to be aware of it, and they're going to have to be aware that memory has a distributed model and synchronization is expensive and all these sorts of issues.

[...]

DP Architecture is interesting again. From my perspective, parallelism is the biggest challenge since high-level programming languages. It's the biggest thing in 50 years because industry is betting its future that parallel programming will be useful.

Industry is building parallel hardware, assuming people can use it. And I think there's a chance they'll fail since the software is not necessarily in place. So this is a gigantic challenge facing the computer science community. If we miss this opportunity, it's going to be bad for the industry.

Imagine if processors stop getting faster, which is not impossible. Parallel programming has proven to be a really hard concept. Just because you need a solution doesn't mean you're going to find it.
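
To make the shift concrete, here is a minimal sketch (my own illustration, not something from the interview) of what explicit parallelism asks of the programmer, using POSIX threads in C; the names and the partitioning scheme are just assumptions for the example. Nothing here happens automatically: you split the data, create the threads and combine the results yourself.

/* Minimal sketch: explicitly parallel array sum with POSIX threads.
   Compile with: cc -O2 -pthread par_sum.c */
#include <pthread.h>
#include <stdio.h>

#define N_ELEMS   (1 << 22)
#define N_THREADS 4

static double data[N_ELEMS];

struct chunk {
    size_t begin, end;   /* half-open range this thread owns */
    double partial;      /* private result: no locking in the hot loop */
};

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    double s = 0.0;
    for (size_t i = c->begin; i < c->end; i++)
        s += data[i];
    c->partial = s;      /* written once, after the loop, to avoid
                            contended updates to shared memory */
    return NULL;
}

int main(void)
{
    pthread_t tid[N_THREADS];
    struct chunk work[N_THREADS];

    for (size_t i = 0; i < N_ELEMS; i++)
        data[i] = 1.0;

    /* The partitioning is explicit: the programmer decides who owns what. */
    size_t per = N_ELEMS / N_THREADS;
    for (int t = 0; t < N_THREADS; t++) {
        work[t].begin = t * per;
        work[t].end   = (t == N_THREADS - 1) ? N_ELEMS : (t + 1) * per;
        pthread_create(&tid[t], NULL, sum_chunk, &work[t]);
    }

    double total = 0.0;
    for (int t = 0; t < N_THREADS; t++) {
        pthread_join(tid[t], NULL);   /* synchronization happens once, here */
        total += work[t].partial;
    }
    printf("sum = %f\n", total);
    return 0;
}

A naive variant that updated a single shared total under a mutex inside the loop would spend most of its time synchronizing instead of adding, which is exactly the "synchronization is expensive" problem Hennessy warns about.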


Finally, they talk about the money the government devotes to research. Patterson, who must know a thing or two about these matters (in his spare time he was also president of the ACM for two years, and bear in mind that a former Stanford provost is now in the government: Condoleezza Rice!), says:


DP I'm worried about funding for the whole field. As ACM's president for two years, I spent a large fraction of my time commenting about the difficulties facing our field, given the drop in funding by certain five-letter government agencies. They just decided to invest it in little organizations like IBM and Sun Microsystems instead of the proven successful path of universities.


They are, obviously, university men. Hennessy adds:


JH [...] when we start talking about parallelism and ease of use of truly parallel computers, we're talking about a problem that's as hard as any that computer science has faced. It's not going to be conquered unless the research program has a level of long-term commitment and has sufficiently significant segments of strategic funding to allow people to do large experiments and try ideas out.



Recommended reading for anyone interested in computer architecture: the passion they have for what they do shows.

(Oh, and in case anyone hasn't heard, Apple has announced a gadget called the iPhone.)

