5. Utility Computing. Utility computing grew out of the rental model of Web services (Chen Yunian, "The Rental Nature of Web Services," 计算机世界 / Computer World, 2002-8-5). Web services are, as a rule, not free. A user can rent services over the network, such as large software packages, servers, or storage devices, which is far cheaper than buying them outright; but once rented, the fee is due whether the resource is used or not. Better still would be to pay for computing the way we pay for water, electricity, or telephone service: only for what is actually consumed. Utility computing arose to meet exactly this need. In utility computing, computing resources, processing power (CPUs), data (files and databases), and procedures (code), are offered as services over the Internet and billed the way public utilities bill: pay as you go. Note that "computing resources" here covers both hardware and software: "CPU" implies the computer as a whole, and "data" implies the storage devices that hold it.
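As a rough, hypothetical illustration (not from the cited article), the pay-as-you-go idea reduces to metering each resource and billing only what was actually consumed; the resource names and per-unit rates in the Python sketch below are invented for the example:

    from dataclasses import dataclass, field

    # Hypothetical per-unit rates; a real provider would publish its own price list.
    RATES = {
        "cpu_hours": 0.50,        # dollars per CPU-hour actually consumed
        "storage_gb_days": 0.02,  # dollars per gigabyte-day actually held
        "requests": 0.0001,       # dollars per service (code) invocation
    }

    @dataclass
    class UsageMeter:
        """Accumulates metered usage and bills only what was used (pay as you go)."""
        usage: dict = field(default_factory=lambda: {k: 0.0 for k in RATES})

        def record(self, resource: str, amount: float) -> None:
            self.usage[resource] += amount

        def bill(self) -> float:
            # No fixed rent: a month with zero usage costs nothing.
            return sum(RATES[r] * used for r, used in self.usage.items())

    meter = UsageMeter()
    meter.record("cpu_hours", 120)         # compute actually used
    meter.record("storage_gb_days", 300)   # e.g. 10 GB held for 30 days
    meter.record("requests", 50_000)       # Web-service calls
    print(f"monthly bill: ${meter.bill():.2f}")   # -> monthly bill: $71.00

Unlike the fixed rental described above, an idle month here costs nothing, which is the whole point of the utility model.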

IBM has been the most active promoter of utility computing, under the name On-Demand Computing. IBM entered this business through grid computing, which it took up in 2001 (see the attached article below, and Fang Meiqi, "Grid Computing: A Further Development of Human Thinking"). At that time IBM was building a nationwide grid linking several British universities and was connecting five Dutch universities into a grid in the Netherlands. In parallel, IBM was already developing the utility computing offering later named on-demand computing. In October 2002 the company announced an initial investment of some US $10 billion in the on-demand computing effort. The project was first of all built on grid computing: the computing resources of IBM-owned machines around the world are linked so that they can be shared, forming a single virtual computer. In a typical grid, the shared machines belong to different owners; the computers connected by SETI@Home, for example, belong to assorted organizations and individuals. The machines IBM links with grid technology, by contrast, all belong to IBM itself; this difference in how grid technology is applied is worth noting. Linking so many heterogeneous pieces still raises integration problems, so the project uses Web services standards to make the applications interoperate. The on-demand computing network is a complex distributed network, and IBM applies its autonomic computing to it; autonomic computing means systems that can manage themselves and respond to all manner of conditions. IBM regards on-demand computing as a major business: as a resource factory serving customers, it stands to be highly profitable.

So IBM's on-demand computing is built on grid computing, with additions: Web services standards so that applications interoperate; autonomic computing so that the system can manage itself and respond to changing conditions; and open standards, including Linux as a platform and common interfaces, so that in a heterogeneous environment the components a business needs can be added and unneeded ones removed.

Grid computing is the key technological foundation of utility computing. Today the Internet is used mainly for communication, chiefly e-mail and instant messaging, while the Web serves as the Internet's retrieval system, giving users access to text, images, and music. Grid computing is emerging as the computing engine of the Internet, much as the Web became its information engine. Utility computing built on this foundation brings several benefits: 1) because resources are shared, the total cost of ownership falls; 2) idle resources can be put to work, so resource utilization rises; 3) grid computing can spread the workload of complex, computation-heavy applications across whatever designated servers and desktop machines are available, reducing the complexity of applications and systems.
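As a minimal sketch of benefit 3), not taken from the article, the Python snippet below farms independent work units out to whatever worker machines happen to be available; the worker names and the task itself are invented for illustration:

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical pool of currently idle grid nodes (servers and desktops).
    available_workers = ["serverA", "serverB", "desktop17"]

    def run_on(worker: str, task_id: int) -> str:
        # Stand-in for shipping one unit of a compute-heavy job to a grid node.
        return f"task {task_id} finished on {worker}"

    tasks = range(9)
    with ThreadPoolExecutor(max_workers=len(available_workers)) as pool:
        futures = [
            pool.submit(run_on, available_workers[i % len(available_workers)], i)
            for i in tasks
        ]
        for f in futures:
            print(f.result())

The application only submits tasks; which machine runs each one is decided by the pool, which is the sense in which the grid hides complexity from the application.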

IBM's e-Business On Demand, heavily advertised on American television, is built on grid computing, Web services, autonomic computing, open standards, and utility computing. Besides IBM, Sun and HP are also pushing utility computing. Sun has announced a project called N1, which aims to develop technology that lets a company pool data-center resources such as servers, storage systems, and networks and operate them as a single entity. HP sells a technology called the Utility Data Center, with which a company running multiple applications can exchange and share data-center resources; HP also offers customers IT resources on a pay-as-you-go basis.

Utility computing is still missing several pieces in practice; for example, the software that dynamically allocates computing resources among applications is not yet fully mature.

Attached articles

  • "The Grid: Computing Without Bounds"
    Scientific American (04/03) Vol. 288, No. 4, P. 78; Foster, Ian

Grid computing is expected to "virtualize" general computational services and make processing, storage, data, and software so ubiquitous that computing will seem like just another utility. An extension of the Internet, grid computing melds computer systems through high-speed networks so that people can avail themselves of data-crunching capabilities and resources otherwise inaccessible from single or sometimes multiple computers; grid systems' reach would be worldwide thanks to shared languages and interaction protocols. Grid technology applications include large-scale scientific and business ventures between members of virtual organizations (VOs), experimentation from afar, and high-performance distributed computing and data analysis. A pervasive computing grid would, for instance, enable e-commerce enterprises to customize information and computing systems according to demand while maintaining their connections to partners, suppliers, and customers; give physicians the ability to remotely access medical records for fast diagnosis; accelerate drug candidate screening; and allow civil engineers to test earthquake-proof designs much faster. Businesses are enthusiastic about grid computing because it promises to relieve them of the time and money spent installing, upgrading, and maintaining private computer systems that are often incompatible, resulting in improved security, reliability, and economies of scale for producers, more resource optimization for distributors, and new remotely powered devices and applications for consumers. Argonne National Laboratory's Globus Project, one of the earliest grid computing efforts, involved software that connected far-flung systems into a VO scheme by standardizing ID authentication, activity request authorization, and other key processes. Its success and subsequent development has inspired work on other grid technology projects, such as the National Technology Grid. Grid computing can only be successful if it is widely adopted, and one way of ensuring this is to make core technology freely available as well as easily and openly deployable.

Why HP might be your next utility company
By David Berlind, Tech Update
April 24, 2003 4:45 AM PT

In the world of utility computing---where compute power is made available on-demand much the same way we get our electricity---the hyperbole among solutions providers vying for the spotlight has reached critical mass. HP, IBM, Sun and others are offering pay-as-you-go service plans, charging for compute cycles as though they were electricity.

HP is taking an approach that grows out of its wholly owned financial services subsidiary, charging by MIPS (millions of instructions per second-- the basic units of raw processing power) the way a power utility would charge by the kilowatt-hour. Leading the way is Irv Rothman, the president and CEO of HP Financial Services.

Traditionally, subsidiaries like HP Financial Services have provided buyers with financing options when it comes to acquiring or leasing expensive information technology assets. It's not unlike what CAT Financial does for buyers of Caterpillar construction equipment, or what GMAC Financial Services does for buyers of General Motors vehicles. So, it's only fitting that any new financial framework that helps technology buyers manage the total cost of technology ownership should fall within Rothman's jurisdiction.

The latest such framework to come out of HP Financial Services is what Rothman refers to as a pay-per-use model. Although not exactly an electricity model, HP's pay-per-use replaces a typical lease and allows IT departments to avoid having to buy as much "system" as necessary to accommodate peak loads.

According to Rothman, the fundamentals of the pay-per-use program are straightforward. First, it's only available for HP's Unix-based SuperDome servers, and the charges are based on percentage of CPU utilization. The equipment is installed behind the customer's firewall, and the customer is required to guarantee a minimum payment based on 25 percent CPU utilization on a 24/7 uptime basis. Price structure is different for every contract because it's dependent on the amount of money that HP Financial Services must borrow to configure the system, and the interest rate at which that money is acquired.
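To make that pricing concrete, here is a small worked example of the scheme as described, billing on CPU utilization with a guaranteed floor of 25 percent utilization around the clock; the hourly rate below is invented, since per the article the real price varied with each contract's financing costs (Python):

    # Hypothetical rate; real pricing depended on HP Financial Services' borrowing costs.
    RATE_PER_HOUR_AT_FULL_LOAD = 1.20   # dollars per hour at 100% CPU utilization
    MINIMUM_UTILIZATION = 0.25          # guaranteed floor, 24 hours a day, 7 days a week

    def monthly_charge(hourly_utilization):
        """Bill each hour at max(actual utilization, the 25% floor) times the rate."""
        return sum(
            max(u, MINIMUM_UTILIZATION) * RATE_PER_HOUR_AT_FULL_LOAD
            for u in hourly_utilization
        )

    # A 720-hour month: mostly idling at 10%, with a 60-hour peak at 90% utilization.
    month = [0.10] * 660 + [0.90] * 60
    print(f"charge: ${monthly_charge(month):,.2f}")   # -> charge: $262.80
    # The 660 quiet hours are billed at the 25% floor (the guaranteed minimum
    # payment), while the peak hours are billed at what was actually used.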

More compelling to me, however, is HP's vision for utility computing, and whether the company has plans to extend the idea to other services, other operating systems, and other business models. The answer is yes, yes and yes, and here's where it starts to get interesting.

Rothman noted that HP was already in the pay-per-use storage business. A natural next step for utility-based pricing, according to Rothman, would be into other areas of business where HP has a significant market footprint. "Right now," said Rothman, "we're looking at offering [pay-per-use] on commercial imaging and printing."

I asked Rothman if utility pricing would be available for other operating systems like Windows or Linux, and if he envisioned a time when CPU capacity would be available on a pay-per-use model over the Internet. Rothman referred me to HP's director of utility computing, Nick van der Zweep, who introduced me to a new acronym: ICOD, for Instant Capacity on Demand.

"Our vision is that some day, all computer resources - the CPU, storage, networking --- will be connected to a fabric or grid and people will be billed for it on a usage basis," said van der Zweep. "You might have your own data center, but if you don't have enough resources, you could get them from next door or somewhere on the other side of the world. The more you use, the more you pay."

In a world like that, van der Zweep said, the units of measurement might be transactions. For example, if your SAP system runs out of gas during the holiday season, you can satisfy that thirst by buying some processing power from an SAP-empowered grid that bills you by the number of transactions it handles for you. Or perhaps the grid bills you by the number of e-mails sent and received.

According to van der Zweep, you wouldn't even need your own data center. "Get a box, put it in your data center, and plug it in," he said. "Or, use a box in our data center. Take all the resources normally found in a datacenter and turn it into a pool of resources that others can share. This will become the new outsourcing model."

Almost a year ago, in "MIPS becoming the next commodity", I envisioned a world where the abstracting layer of APIs would be processor-agnostic in the same way that the Java Virtual Machine is agnostic to the operating system. Any processor, or pool of processors, could service an on-demand request, and some processors would be able to deliver more capacity at lower prices than others.

In the HP scheme of things, IA-64 is the unifying architecture on which all HP-supported operating systems will one day run. This includes OpenVMS, HP-UX, NSK (Non-Stop), Windows, and Linux.

I asked van der Zweep if application-specific grids were the be-all and end-all, or if perhaps, through something like Web services, the processor could be abstracted into a layer of network-based processor-on-demand APIs. That's when an interesting piece of HP's grander scheme rose to the surface of our discussion.

Van der Zweep pointed out that people still care about what operating system the applications run on because--with the exception of some applications that run on a Java Virtual Machine--most software is compiled to run on a specific operating system.

But, if all of these operating systems are running on IA-64, then something like a single SuperDome server could be partitioned, physically or virtually, into systems each running a separate instance of any one of those operating systems. In such a scenario, said van der Zweep, the utility concept can dive below the application layer and start requesting capacity at the OS level. For example, if your Oracle database is running on Linux and needs more juice, it can reach out to another Linux capacity provider and get it.

If this sounds like pie-in-the-sky stuff, HP certainly isn't seeing it that way. "Already," said van der Zweep, "we've demonstrated HP-UX, Linux, and Windows running in separate hardware partitions on a single SuperDome server. But soon, we'll have software partitions and we'll be able to split one cycle to this partition, and 10 cycles to that one."

Van der Zweep envisions a day when people get all their compute cycles from one super data center and get their storage from another. "Our vision includes intelligent provisioning, where an entire copy of a database can be migrated to the location that provides the cheapest batch processing at whatever time of day," said van der Zweep. "The database could be running in San Francisco while the storage is in London, and economics dictate where you outsource to at any given time." It sounds similar to the way voice services get billed at peak and non-peak hours.

There are hurdles, van der Zweep admits. "Network latency is an issue. But we're working on that too."

I've known for quite some time that IA-64 was the strategic platform to which all of HP was migrating. But it's only now that I'm starting to get a clearer picture of the company-wide roadmap. Whether HP gets there remains to be seen. But the vision seems sound.
