J D C T E C H T I P S
TIPS, TECHNIQUES, AND SAMPLE CODE
March 6, 2001. This issue covers:
* Cloning Objects
* Using the Serializable Fields API
These tips were developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
You can view this issue of the Tech Tips on the Web at
http://java.sun.com/jdc/JDCTechTips/2001/tt0306.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
CLONING OBJECTS
Suppose you have some objects that you're using in an
application. How can you make copies of the objects? The most
obvious approach is to simply assign one object to another, like
this:
obj2 = obj1;
But this approach actually does no copying of the objects;
instead the approach only copies the object references. In other
words, after you perform this operation, there is still only one
object, but now there is an additional reference to the object.
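The point can be made concrete with a short sketch (the RefPoint class here is illustrative, not part of the tip):

```java
// Demonstrates that assignment copies the reference, not the object.
class RefPoint {
    int x;
    RefPoint(int x) { this.x = x; }
}

public class RefCopyDemo {
    // returns the value seen through obj2 after mutating via obj1
    static int aliasDemo() {
        RefPoint obj1 = new RefPoint(37);
        RefPoint obj2 = obj1;   // copies the reference only
        obj1.x = 99;            // a change made through obj1 ...
        return obj2.x;          // ... is visible through obj2
    }
    public static void main(String[] args) {
        System.out.println(aliasDemo()); // prints 99: one object, two references
    }
}
```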
If this seemingly obvious approach doesn't work, how do you
actually clone an object? Why not try the method Object.clone?
This method is available to all of Object's subclasses. Here's an
attempt:
class A {
private int x;
public A(int i) {
x = i;
}
}
public class CloneDemo1 {
public static void main(String args[])
throws CloneNotSupportedException {
A obj1 = new A(37);
A obj2 = (A)obj1.clone();
}
}
This code triggers a compile error, because Object.clone is a
protected method.
So let's try again, with another approach:
class A {
private int x;
public A(int i) {
x = i;
}
public Object clone() {
try {
return super.clone();
}
catch (CloneNotSupportedException e) {
throw new InternalError(e.toString());
}
}
}
public class CloneDemo2 {
public static void main(String args[])
throws CloneNotSupportedException {
A obj1 = new A(37);
A obj2 = (A)obj1.clone();
}
}
In this approach, you define your own clone method, which
overrides Object.clone. The CloneDemo2 program compiles, but
throws a CloneNotSupportedException when you try to run it.
There's still a piece missing. You have to specify that the class
containing the clone method implements the Cloneable interface,
like this:
class A implements Cloneable {
private int x;
public A(int i) {
x = i;
}
public Object clone() {
try {
return super.clone();
}
catch (CloneNotSupportedException e) {
throw new InternalError(e.toString());
}
}
public int getx() {
return x;
}
}
public class CloneDemo3 {
public static void main(String args[])
throws CloneNotSupportedException {
A obj1 = new A(37);
A obj2 = (A)obj1.clone();
System.out.println(obj2.getx());
}
}
Success! CloneDemo3 compiles and produces the expected result:
37
You've learned that you must explicitly specify the clone method,
and your class must implement the Cloneable interface. Cloneable
is an example of a "marker" interface. The interface itself
specifies nothing. However, Object.clone checks whether a class
implements it, and if not, throws a CloneNotSupportedException.
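You can see the marker check happen at run time with a small sketch (the class names are illustrative): a class whose clone calls super.clone but that omits Cloneable is rejected.

```java
// A class that calls super.clone() but does NOT implement Cloneable;
// Object.clone detects the missing marker interface at run time.
class NotCloneable {
    public Object clone() throws CloneNotSupportedException {
        return super.clone();
    }
}

public class MarkerDemo {
    // returns true if Object.clone rejected the class
    static boolean cloneRejected() {
        try {
            new NotCloneable().clone();
            return false;
        }
        catch (CloneNotSupportedException e) {
            return true;  // thrown because NotCloneable lacks the marker
        }
    }
    public static void main(String[] args) {
        System.out.println(cloneRejected()); // prints true
    }
}
```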
Object.clone does a simple cloning operation, copying all
fields from one object to a new object. In the CloneDemo3
example, A.clone calls Object.clone. Then Object.clone creates
a new A object and copies the fields of the existing object to
it.
There are a couple of other important points to consider about
the approach illustrated in the CloneDemo3 example. One is that
you can prevent a user of your class from cloning objects of
the class. To do this, you don't implement Cloneable for the
class, and you make the clone method always throw an exception. Most of
the time, however, it's better to explicitly plan for and
implement a clone method in your class so that objects are
copied appropriately.
Another point is that you can support cloning either
unconditionally or conditionally. The code in the CloneDemo3
example supports cloning unconditionally, and the clone method
does not propagate CloneNotSupportedException.
A more general approach is to conditionally support cloning for
a class. In this case, objects of the class itself can be cloned,
but objects of subclasses possibly cannot be cloned. For
conditional cloning, the clone method must declare that it can
propagate CloneNotSupportedException. Another example of
conditional support for cloning is where a class is a collection
class whose objects can be cloned only if the collection elements
can be cloned.
Yet another approach is to implement an appropriate clone method
in a class, but not implement Cloneable. In that case, subclasses
can support cloning if they wish.
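A minimal sketch of this opt-in pattern, with illustrative class names: the base class defines clone but leaves out Cloneable, so only subclasses that add the marker can actually be cloned.

```java
// OptBase defines clone but does not implement Cloneable, so cloning
// is conditional: only subclasses that add the marker can be cloned.
class OptBase {
    public Object clone() throws CloneNotSupportedException {
        return super.clone();   // fails unless 'this' is Cloneable
    }
}

class OptDerived extends OptBase implements Cloneable {
}

public class ConditionalDemo {
    static boolean canClone(OptBase b) {
        try {
            b.clone();
            return true;
        }
        catch (CloneNotSupportedException e) {
            return false;
        }
    }
    public static void main(String[] args) {
        System.out.println(canClone(new OptBase()));    // prints false: no marker
        System.out.println(canClone(new OptDerived())); // prints true: opted in
    }
}
```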
Cloning can get tricky. For example, because Object.clone does
a simple object field copy, it sometimes might not be what you
want. Here's an example:
import java.util.*;
class A implements Cloneable {
public HashMap map;
public A() {
map = new HashMap();
map.put("key1", "value1");
map.put("key2", "value2");
}
public Object clone() {
try {
return super.clone();
}
catch (CloneNotSupportedException e) {
throw new InternalError(e.toString());
}
}
}
public class CloneDemo4 {
public static void main(String args[]) {
A obj1 = new A();
A obj2 = (A)obj1.clone();
obj1.map.remove("key1");
System.out.println(obj2.map.get("key1"));
}
}
You might expect CloneDemo4 to display the result:
value1
But instead it displays:
null
What's happening here? In CloneDemo4, A objects contain a HashMap
reference. When A objects are copied, the HashMap reference is
also copied. This means that an object clone contains the
original reference to the HashMap object. So when a key is
removed from the HashMap in the original object, the HashMap in
the copy is updated as well.
To fix this problem, you can make the clone method a bit more
sophisticated:
import java.util.*;
class A implements Cloneable {
public HashMap map;
public A() {
map = new HashMap();
map.put("key1", "value1");
map.put("key2", "value2");
}
public Object clone() {
try {
A aobj = (A)super.clone();
aobj.map = (HashMap)map.clone();
return aobj;
}
catch (CloneNotSupportedException e) {
throw new InternalError(e.toString());
}
}
}
public class CloneDemo5 {
public static void main(String args[]) {
A obj1 = new A();
A obj2 = (A)obj1.clone();
obj1.map.remove("key1");
System.out.println(obj2.map.get("key1"));
}
}
The CloneDemo5 example displays the expected result:
value1
In CloneDemo5, A.clone calls super.clone to create the A object
and copy the map field. The example code then calls HashMap.clone
to do a
special type of cloning peculiar to HashMaps. This operation
consists of creating a new hash table and copying entries to it
from the old one.
If two objects share a reference, as in CloneDemo4, then you're
likely to have problems unless the reference is read-only. To get
around the problems, you need to implement a clone method that
handles this situation. Another way of saying it is that
Object.clone does a "shallow" copy, that is, a simple
field-by-field copy. It doesn't do a "deep" copy, where each
object referred to by a field or array is itself recursively
copied.
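Note that even HashMap.clone is only one level deep. The following sketch (illustrative names) shows that the cloned map still shares its value objects with the original:

```java
import java.util.HashMap;

// Even HashMap.clone is shallow with respect to the stored values:
// the cloned map holds references to the same value objects.
public class ShallowValueDemo {
    static boolean valuesShared() {
        HashMap map = new HashMap();
        map.put("buf", new StringBuffer("abc"));
        HashMap copy = (HashMap)map.clone();     // new table, same values
        ((StringBuffer)map.get("buf")).append("def");
        // the change is visible through the copy, too
        return copy.get("buf").toString().equals("abcdef");
    }
    public static void main(String[] args) {
        System.out.println(valuesShared()); // prints true
    }
}
```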
It's extremely important to call super.clone, instead of, for
example, saying "new CloneDemo5" to create an object. You should
call super.clone at each level of the class hierarchy. That's
because each level might have its own problems with shared
objects. If you use "new" instead of super.clone, then your code
will be incorrect for any subclass that extends your class; the
code will call your clone method and receive an incorrect object
type in return.
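A short sketch (illustrative class names) of why this matters: super.clone creates an object of the run-time class, so a subclass inherits a correct clone automatically.

```java
// super.clone creates an object of the *run-time* class, so a
// subclass inherits a correct clone; using "new CloneBase()" instead
// would return the wrong type for subclass instances.
class CloneBase implements Cloneable {
    public Object clone() {
        try {
            return super.clone();
        }
        catch (CloneNotSupportedException e) {
            throw new InternalError(e.toString());
        }
    }
}

class CloneSub extends CloneBase {
}

public class TypeDemo {
    static boolean cloneHasSubclassType() {
        CloneBase obj = new CloneSub();
        return obj.clone() instanceof CloneSub;  // super.clone saw a CloneSub
    }
    public static void main(String[] args) {
        System.out.println(cloneHasSubclassType()); // prints true
    }
}
```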
One other thing to know about cloning is that it's possible to
clone any array, simply by calling the clone method:
public class CloneDemo6 {
public static void main(String args[]) {
int vec1[] = new int[]{1, 2, 3};
int vec2[] = (int[])vec1.clone();
System.out.println(vec2[0] + " " + vec2[1] +
" " + vec2[2]);
}
}
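One caveat, sketched below: array cloning is also shallow. For an array of arrays, clone copies only the outer array, so the inner arrays remain shared with the original.

```java
// Array clone copies only the top level; for an array of arrays,
// the clone shares the inner arrays with the original.
public class ArrayCloneDemo {
    static int sharedRowValue() {
        int[][] grid = { {1, 2}, {3, 4} };
        int[][] copy = (int[][])grid.clone(); // new outer array only
        grid[0][0] = 99;                      // inner row is shared
        return copy[0][0];
    }
    public static void main(String[] args) {
        System.out.println(sharedRowValue()); // prints 99
    }
}
```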
A final important point about cloning: it's a way to create and
initialize new objects, but it's not the same as calling a
constructor. One example of why this distinction matters
concerns blank finals, that is, uninitialized fields declared
"final", which can only be given a value in constructors. Here's
an example of blank final usage:
public class CloneDemo7 {
private int a;
private int b;
private final long c;
public CloneDemo7(int a, int b) {
this.a = a;
this.b = b;
this.c = System.currentTimeMillis();
}
public static void main(String args[]) {
CloneDemo7 obj = new CloneDemo7(37, 47);
}
}
In the CloneDemo7 constructor, a final field "c" is given
a timestamp value that is obtained from the system clock. What
if you want to copy an object of this type? Object.clone copies
all the fields, but you want the timestamp field to be set to the
current system clock value. However, if the field is final, you
can only set the field in a constructor, not in a clone method.
Here's an illustration of this issue:
public class CloneDemo8 {
private int a;
private int b;
private final long c;
public CloneDemo8(int a, int b) {
this.a = a;
this.b = b;
this.c = System.currentTimeMillis();
}
public CloneDemo8(CloneDemo8 obj) {
this.a = obj.a;
this.b = obj.b;
this.c = System.currentTimeMillis();
}
public Object clone() throws CloneNotSupportedException {
//this.c = System.currentTimeMillis();
return super.clone();
}
public static void main(String args[]) {
CloneDemo8 obj = new CloneDemo8(37, 47);
CloneDemo8 obj2 = new CloneDemo8(obj);
}
}
If you uncomment the line that attempts to set final field
"c", the program won't compile. So instead of using clone to set
the field, the program uses a copy constructor. A copy
constructor is a constructor for the class that takes an object
reference of the same class type, and implements the appropriate
copying logic.
You might think you could solve this problem by not using blank
finals, and instead simply use final fields that are immediately
initialized with the system time, like this:
class A implements Cloneable {
final long x = System.currentTimeMillis();
public Object clone() {
try {
return super.clone();
}
catch (CloneNotSupportedException e) {
throw new InternalError(e.toString());
}
}
}
public class CloneDemo9 {
public static void main(String args[]) {
A obj1 = new A();
// sleep 100 ms before doing clone,
// to ensure unique timestamp
try {
Thread.sleep(100);
}
catch (InterruptedException e) {
System.err.println(e);
}
A obj2 = (A)obj1.clone();
System.out.println(obj1.x + " " + obj2.x);
}
}
This doesn't work either. When you run the program, you see that
obj1.x and obj2.x have the same value. This indicates that normal
object initialization is not done when an object is cloned, and
you can't set the value of final fields within the clone method.
So if simple copying is not appropriate for initializing a field,
either you can't declare the field final, or you need to use copy
constructors as a cloning alternative.
For more information about object cloning, see Section 3.9,
Cloning Objects, and Section 2.5.1, Constructors, in "The Java
Programming Language Third Edition" by Arnold, Gosling, and
Holmes
(http://java.sun.com/docs/books/javaprog/thirdedition/).
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING THE SERIALIZABLE FIELDS API
Serialization is a mechanism by which arbitrary objects can be
converted into a byte stream, to be saved to disk or transmitted
across a network. The byte stream can later be deserialized to
reconstitute the object. You make the default serialization
mechanism available simply by declaring that your class
implements the java.io.Serializable marker interface.
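As a minimal sketch of the default mechanism (the class and field names are illustrative), an object can be round-tripped entirely in memory, without a file:

```java
import java.io.*;

// Default serialization round trip, done in memory rather than on
// disk: write the object to a byte stream, then reconstitute it.
class Counter implements Serializable {
    int count = 42;
}

public class RoundTripDemo {
    static int roundTrip() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(new Counter());        // object -> byte stream
            oos.close();
            ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
            Counter c = (Counter)ois.readObject(); // byte stream -> object
            return c.count;
        }
        catch (Exception e) {
            throw new RuntimeException(e.toString());
        }
    }
    public static void main(String[] args) {
        System.out.println(roundTrip()); // prints 42
    }
}
```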
This tip presents a two-part example that uses what is called
the Serializable Fields API. If you've worked with serialization,
you know that it's possible to override the default mechanism for
a given class. One way you can do this is by defining your own
readObject and writeObject methods. The Serializable Fields API
ties in to this mechanism.
Imagine a simple application where you would like to keep a
cumulative total of the number of weeks that have elapsed from
some starting point. You save this information in object form to
a file, and periodically add to the total. Here's a program that
does this:
import java.io.*;
class ElapsedTime implements Serializable {
static final long serialVersionUID = 892420644258946182L;
private double numweeks;
// read a serialized object
private void readObject(ObjectInputStream in)
throws IOException, ClassNotFoundException {
ObjectInputStream.GetField fields = in.readFields();
numweeks = fields.get("numweeks", 0.0);
}
// write a serialized object
private void writeObject(ObjectOutputStream out)
throws IOException {
ObjectOutputStream.PutField fields = out.putFields();
fields.put("numweeks", numweeks);
out.writeFields();
}
// constructor
public ElapsedTime() {
numweeks = 0.0;
}
// add to the elapsed time
public void addTime(double t) {
numweeks += t;
}
// return the elapsed time
public double getTime() {
return numweeks;
}
}
public class FieldDemo1 {
private static final String DATAFILE = "data.ser";
public static void main(String args[])
throws IOException, ClassNotFoundException {
if (args.length != 1) {
System.err.println("missing command line argument");
System.exit(1);
}
ElapsedTime et = null;
// read serialized object if data file exists,
// else create a new object
if (new File(DATAFILE).exists()) {
FileInputStream fis = new FileInputStream(DATAFILE);
ObjectInputStream ois = new ObjectInputStream(fis);
et = (ElapsedTime)ois.readObject();
fis.close();
}
else {
et = new ElapsedTime();
}
// update the elapsed time
et.addTime(Double.parseDouble(args[0]));
// write the serialized object
FileOutputStream fos = new FileOutputStream(DATAFILE);
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(et);
oos.close();
System.out.println("Elapsed time is " +
et.getTime() + " weeks");
}
}
The FieldDemo1 program reads a serialized object of class
ElapsedTime from the data file (data.ser). If the file doesn't
exist, the program creates a default object. Then the program
updates the cumulative time, and writes the object back to the
file. You can use the program like this:
$ javac FieldDemo1.java
$ java FieldDemo1 10
$ java FieldDemo1 20
This sequence adds two values (10 and 20) to the cumulative
total of elapsed weeks.
The program's use of serialization is straightforward,
except for two things. The first is the serialVersionUID field
(this will be discussed further a little later in the tip). The
second thing is the use of the Serializable Fields API.
The ElapsedTime class in the FieldDemo1 program defines
readObject and writeObject methods. These methods interact with
serialization by
specifying field names on which to operate. For example, one of
the calls is:
numweeks = fields.get("numweeks", 0.0);
This interface is quite different from the usual mechanism,
where the fields of the object are set for you. Using a custom
readObject method with the Serializable Fields API provides an
alternative so that you can specify fields by name. The fields
can be accessed in any order. Also, the default value specified
to the get method is used if the input stream does not contain
an explicit value for the field.
The interface used in writeObject is similar. The program uses
putFields to set up a buffer, and then it can put serializable
field values into the buffer in any order. The program uses
writeFields to write the buffer to the stream. Any fields that
have not been set are given default values (0, false, null)
appropriate to the field type.
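The read/write pattern can be exercised in memory as well. This sketch mirrors the tip's numweeks field with an illustrative Weeks class:

```java
import java.io.*;

// In-memory round trip through the Serializable Fields API,
// mirroring the ElapsedTime pattern above.
class Weeks implements Serializable {
    private double numweeks;
    Weeks(double w) { numweeks = w; }
    double get() { return numweeks; }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        ObjectInputStream.GetField fields = in.readFields();
        // 0.0 is used if the stream carries no value for numweeks
        numweeks = fields.get("numweeks", 0.0);
    }
    private void writeObject(ObjectOutputStream out) throws IOException {
        ObjectOutputStream.PutField fields = out.putFields();
        fields.put("numweeks", numweeks);
        out.writeFields();
    }
}

public class FieldsRoundTrip {
    static double roundTrip(double w) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(new Weeks(w));
            oos.close();
            ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
            return ((Weeks)ois.readObject()).get();
        }
        catch (Exception e) {
            throw new RuntimeException(e.toString());
        }
    }
    public static void main(String[] args) {
        System.out.println(roundTrip(10.0)); // prints 10.0
    }
}
```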
Note that the use of the Serializable Fields API in the above
example is not required. It offers a different type of interface
that helps to make clear exactly what is going on with the
setting of the various fields. But there are tradeoffs here, for
example, in performance. Using this API when you don't need to
might not always be a good idea.
Suppose that you've been using the above approach for a while
in your application, and then you decide to represent elapsed
time as a number of days instead of weeks. It's pretty easy to
change the ElapsedTime class to do this. But what happens to all
the ElapsedTime objects that have been serialized and are sitting
around in databases and files?
Suppose too, for the moment, that your original version of
ElapsedTime did not declare the static "serialVersionUID" field.
If you change ElapsedTime, then when you try to deserialize old
objects of this class, you get an InvalidClassException. This is
because serialization uses what is known as a "serial version
UID" to detect compatible versions of a given class. The serial
version UID is a 64-bit number whose default value (if not
explicitly declared) is a hash of the class name, interface
names, member signatures, and miscellaneous class attributes.
The serial version UID for a class is written when instances of
the class are serialized. If the class changes, the default
serial version UID value changes as well. When ObjectInputStream
deserializes objects, it checks to make sure that serial version
UIDs contained in the incoming stream match those of the classes
loaded in the receiving JVM*. If a mismatch occurs, then an
InvalidClassException is thrown.
If you change a class, but you don't want the serialization
mechanism to complain, you need to declare your own
serialVersionUID field in the class. If the earlier version of
the class did not already declare a serial version UID value,
then you can determine its implicit (default) serial version UID
value by using the serialver tool, which is included in the
standard JDK distribution:
$ javac FieldDemo1.java
$ serialver -classpath . ElapsedTime
If the previous version of the class already explicitly declared
a serial version UID value, then you can simply leave the
declaration in place with the same value.
As a general rule, it's always a good idea to declare explicit
serial version UID values for serializable classes. There are two
reasons for doing that:
o Runtime computation of the serial version UID hash is expensive
in terms of performance. If you don't declare a serial version
UID value, then serialization must do it for you at runtime.
o The default serial version UID computation algorithm is
extremely sensitive to class changes, including those that
might result from legal differences in compiler implementations.
For instance, a given serializable nested class might have
different default serial version UID values depending on which
javac was used to compile it. That's because javac must add
synthetic class members to implement nested classes, and
differences in these synthetic class members are picked up by
the serial version UID calculation.
Let's look at the second demo program, and then finish the
explanation:
import java.io.*;
class ElapsedTime implements Serializable {
static final long serialVersionUID = 892420644258946182L;
// list of fields to be serialized; this list
// need not match the actual fields in ElapsedTime
private static final ObjectStreamField
serialPersistentFields[] = {
new ObjectStreamField("numweeks", Double.TYPE)
};
// transient (unserialized) field for this object
private transient double numdays;
// read a serialized object
private void readObject(ObjectInputStream in)
throws IOException, ClassNotFoundException {
ObjectInputStream.GetField fields = in.readFields();
numdays = fields.get("numweeks", 0.0) * 7.0;
}
// write a serialized object
private void writeObject(ObjectOutputStream out)
throws IOException {
ObjectOutputStream.PutField fields = out.putFields();
fields.put("numweeks", numdays / 7.0);
out.writeFields();
}
// constructor
public ElapsedTime() {
numdays = 0.0;
}
// add to the elapsed time
public void addTime(double t) {
numdays += t;
}
// get the elapsed time
public double getTime() {
return numdays;
}
}
public class FieldDemo2 {
private static final String DATAFILE = "data.ser";
public static void main(String args[])
throws IOException, ClassNotFoundException {
if (args.length != 1) {
System.err.println("missing command line argument");
System.exit(1);
}
ElapsedTime et = null;
// read serialized object if it exists, else
// create a new object
if (new File(DATAFILE).exists()) {
FileInputStream fis = new FileInputStream(DATAFILE);
ObjectInputStream ois = new ObjectInputStream(fis);
et = (ElapsedTime)ois.readObject();
fis.close();
}
else {
et = new ElapsedTime();
}
// add to the elapsed time
et.addTime(Double.parseDouble(args[0]));
// write the serialized object
FileOutputStream fos = new FileOutputStream(DATAFILE);
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(et);
oos.close();
System.out.println("Elapsed time is " +
et.getTime() + " days");
}
}
Both programs have the serialVersionUID field set to the same
value, so serialization will consider the two ElapsedTime class
versions to be compatible with each other.
But there's a problem here. If elapsed time was formerly
represented as a number of weeks, but is now represented as a
number of days, then the field numweeks in the serialized objects
would simply be wrong. You could change it to days, but that
would make all the existing objects invalid.
The solution to this problem is to keep the original
representation in the serialized objects, but use the
Serializable Fields API to translate between days and weeks. That
is, continue to read and write objects containing numweeks, but
then multiply or divide by 7.0 to give the number of days.
Notice that numdays is the field used in the new version of the
ElapsedTime class. But this is not the same as the field in
actual serialized objects, which is numweeks. If you try to put
fields by name, using the Serializable Fields API, and the name
doesn't actually represent a serializable field in the class, you
get an exception. So you need to go a step further, by adding the
following declaration to the ElapsedTime class:
private static final ObjectStreamField
serialPersistentFields[] = {
new ObjectStreamField("numweeks", Double.TYPE)
};
This is a special field that the serialization mechanism knows
about. By setting this field, you can override the default set of
serializable fields. The fields in this list do not have to be
part of the current class definition for the class that you're
trying to serialize. The ObjectStreamField constructor takes
a field name and a field type, with the type represented using
java.lang.Class.
So what you've done is serialize numweeks and declare numdays
as transient, that is, not subject to serialization. Using this
approach, it is possible for both old and new versions of the
ElapsedTime class to read and write serialized objects. In other
words, you can read existing objects using the new class version,
write out updated objects, and then use the old class version to
read and write them as well.
Here's an example sequence that illustrates this idea (before you
try this sequence, be sure to remove any copy of data.ser that
you have around):
javac FieldDemo1.java
java FieldDemo1 10
javac FieldDemo2.java
java FieldDemo2 14
javac FieldDemo1.java
java FieldDemo1 5
javac FieldDemo2.java
java FieldDemo2 28
When the sequence is executed, the result is:
Elapsed time is 10.0 weeks
Elapsed time is 84.0 days
Elapsed time is 17.0 weeks
Elapsed time is 147.0 days
This output demonstrates that both object views (weeks and days)
are represented. For example, the first program adds 10 weeks to
the total, or 70 days. Then the second program is called to add 14
days, and the total is now 84 days.
For more information about using the Serializable Fields API, see
the tutorial "Using Serialization and the Serializable Fields API"
(http://java.sun.com/j2se/1.3/docs/guide/serialization/
examples/altimpl/index3.html), and the Java Object
Serialization Specification
(http://java.sun.com/j2se/1.3/docs/guide/serialization/
spec/serialTOC.doc.html).
. . . . . . . . . . . . . . . . . . . . . . .
February 27, 2001. This issue of the JDC Tech Tips covers
two topics about Java(tm) Remote Method Invocation (RMI). The
topics covered are:
* The Lifecycle of an RMI Server
* Dynamic Class Loading in RMI
This tip was developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
This issue of the JDC Tech Tips is written by Stuart Halloway,
a Java specialist at DevelopMentor (http://www.develop.com/java).
You can view this issue of the Tech Tips on the Web at
http://java.sun.com/jdc/JDCTechTips/2001/tt0227.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
THE LIFECYCLE OF AN RMI SERVER
RMI allows you to invoke methods on objects in other Java(tm)
virtual machines*, often across the network. Applications that
use RMI to invoke methods in these remote objects are typically
composed of two separate programs: an RMI client that makes
requests, and an RMI server that executes the requests and
returns results to the client. This tip examines the steps in
the operation of an RMI server, that is, its lifecycle.
All RMI servers implement a remote interface, that is, an
interface that extends java.rmi.Remote.
A remote interface is the protocol that is used to communicate
a request to an RMI server and to return results. So let's
create a simple remote interface:
import java.rmi.*;
public interface Echo extends Remote {
public String echo(String value) throws RemoteException;
}
Because echo is a method in a remote interface it must be
declared to throw a RemoteException.
An object that implements a remote interface becomes a remote
object, and its methods can be invoked remotely (that is, in
other VMs) through RMI. Let's write a simple RMI server that
implements the remote interface:
import java.io.*;
import java.rmi.*;
import java.rmi.registry.*;
import java.rmi.server.*;
public class EchoServer implements Echo {
public String echo(String value) {
return value;
}
public static void main(String[] args) {
try {
//create an object
EchoServer serv = new EchoServer();
//export the object
Echo remoteObj =
(Echo) UnicastRemoteObject.exportObject(serv);
//make object findable
Registry r = LocateRegistry.getRegistry(
"localhost", 1099);
r.rebind("ECHO", remoteObj);
BufferedReader rdr = new BufferedReader(
new InputStreamReader(System.in));
//Serve clients
while (true) {
System.out.println("Type EXIT to shutdown the server.");
if ("EXIT".equals(rdr.readLine())) {
break;
}
}
//unregister object
r.unbind("ECHO");
//unexport object
UnicastRemoteObject.unexportObject(serv, true);
}
catch (Exception e) {
e.printStackTrace();
}
}
}
Note that EchoServer is deliberately designed not to extend
UnicastRemoteObject. The UnicastRemoteObject class is a subclass
of RemoteObject (which provides implementations of hashCode,
equals, and toString for remote objects) and hides most of the
details of making an object available to remote clients. For the
purpose of
demonstration, EchoServer instead calls RMI APIs that show each
step of a server's lifecycle, as follows:
1. Create an object:
EchoServer serv = new EchoServer();
2. Export an object (declare intent to remote):
Echo remoteObj =
(Echo) UnicastRemoteObject.exportObject(serv);
3. Make the object findable (in this case by binding into
the registry):
Registry r = LocateRegistry.getRegistry(
"localhost", 1099);
r.rebind("ECHO", remoteObj);
4. Serve clients:
while (true) {
System.out.println("Type EXIT to shutdown the server.");
if ("EXIT".equals(rdr.readLine())) {
break;
}
}
5. Unregister the object if registered:
r.unbind("ECHO");
6. Unexport the object:
UnicastRemoteObject.unexportObject(serv, true);
A seventh step in an RMI server's lifecycle (though not
demonstrated in the EchoServer code) is the garbage
collector collecting the object if the object becomes
unreferenced.
Step 1 is common to any Java object. Step 4 is common to any
Java object that needs to serve a client request. Also garbage
collection is common to Java objects that become unreferenced.
But the other steps are specific to remote objects.
In Step 2, you explicitly export an object. This tells the RMI
runtime that you want this object to be available to remote VMs.
Exporting an object also returns the stub for the object. The
stub is a class that does the work of formatting and
transmitting method arguments to the RMI server and returning
results to the RMI client. Later in this tip, you'll see how to
create the stub. Note that many RMI classes skip this step of
explicitly exporting an object. Instead these classes subclass
UnicastRemoteObject, which automatically exports the object in
its constructor.
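On later JDKs (5.0 and up), the two-argument form of exportObject returns a dynamically generated stub, so the export step can be sketched and exercised in a single VM without rmic or a registry. The interface and class names here are illustrative:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Export an object and invoke it through the returned stub, all in
// one VM; no registry is needed when the caller already holds the
// stub. The stub marshals the argument and the return value.
interface Shout extends Remote {
    String shout(String s) throws RemoteException;
}

class ShoutServer implements Shout {
    public String shout(String s) {
        return s.toUpperCase();
    }
}

public class ExportDemo {
    static String callThroughStub() {
        try {
            ShoutServer serv = new ShoutServer();
            // export on an anonymous port; the call below travels
            // through the RMI runtime, not a direct method call
            Shout stub = (Shout)UnicastRemoteObject.exportObject(serv, 0);
            String result = stub.shout("hello");
            UnicastRemoteObject.unexportObject(serv, true);
            return result;
        }
        catch (Exception e) {
            throw new RuntimeException(e.toString());
        }
    }
    public static void main(String[] args) {
        System.out.println(callThroughStub()); // prints HELLO
    }
}
```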
In Step 3, you make an object findable by code running in other
VMs. The simplest way to do this is to register the stub with the
RMI registry. The registry is a simple name-to-object lookup
service that listens on port 1099 by default. Other ways to make an
object findable are to simply pass the object to a remote method,
or return it in a remote method implementation. In these
situations, the RMI runtime automatically replaces the object
with its stub. Of course, from the RMI client's perspective,
this leads to an interesting question: Where did the remote
object come from? Which leads to the interesting answer: It came
from another remote object! Sooner or later, there must be one or
more "original" objects that are obtained without the help of
other RMI objects. The RMI registry provides this bootstrap
service.
In Step 4, you decide how long the object will be available to
service clients. In the simple example above, the server process
presents a console prompt that tells the user how to shut down
the server ("Type EXIT to shutdown the server."). In real-world
deployments, you need to use application-specific logic to decide
when to stop exporting your server.
Steps 5 and 6 simply reverse steps 3 and 2.
Now let's test this simple RMI server. To do that, you need to
perform the following actions:
o Create the stub class.
o Create an RMI client that connects to the RMI server.
o Start the RMI registry.
o Run the RMI server.
o Run the RMI client.
Create the stub class with the rmic command-line tool:
rmic EchoServer
If you look in your class file directory, you should see the
EchoServer_Stub.class file.
Use the following code for the RMI client that connects to and
uses EchoServer:
import java.rmi.*;
public class EchoClient {
public static void main(String [] args) {
try {
System.out.println("Connecting to echo server...");
Echo e = (Echo) Naming.lookup("ECHO");
String result = e.echo("Hello");
System.out.println("Echo returned " + result);
}
catch (Exception e) {
e.printStackTrace();
}
}
}
You will need three console windows to start the RMI registry,
run the RMI server, and run the RMI client. In all of these
windows, make sure your class path is set to the directory
where the various Echo classes live. Then, run the following
three commands, one per window, in this order:
rmiregistry
java EchoServer
java EchoClient
The client should successfully connect to the server and return
an echo. You should see the following displayed:
Connecting to echo server...
Echo returned Hello
If it does not, read on. The tip "Dynamic Class Download in
RMI" shows some techniques for debugging RMI problems.
For more advanced ideas on the RMI server lifecycle, read the
Remote Object Activation tutorial at
http://java.sun.com/j2se/1.3/docs/guide/rmi/activation.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
DYNAMIC CLASS DOWNLOAD IN RMI
The example in the tip "The Lifecycle of an RMI Server" is a bit
unrealistic because the client and server code reside in the
same directory or folder. In a typical deployment, the RMI
client, EchoClient, would be on one machine, and the RMI server,
EchoServer, would be on another. What about the Echo interface?
In most systems, the interface is needed on both client and
server. However, this is not typically a deployment problem
because interfaces rarely change. Besides, the EchoClient could
not do much with an EchoServer without knowing some interface
through which to make the call.
The stub presents more of a deployment problem. In JDK 1.3,
RMI requires that clients use a stub class to connect to a
remote VM. The stub class hides the details of network
communication, sending method arguments and waiting for a return
value. Today, you generate stub classes with the rmic command
line tool. However, future versions of RMI might allow clients to
build stubs dynamically at runtime using dynamic proxies. Until
that feature is added, clients need a way to guarantee that
stubs are available. Installing the stubs on the client is not
always viable, because stubs are an implementation detail that
might change over time. Dynamic class loading allows client
virtual machines to find stubs at runtime, without any special
coding in the client.
To see the problem, test the EchoClient and EchoServer, but
unlike the previous tip, run the RMI registry, EchoClient,
and EchoServer in separate class paths. This would be more
typical of a real deployment situation.
First, start the RMI registry from a console that does not have
any of the Echo classes on the class path. Pass in an additional
argument to debug class loading problems, as shown below:
rmiregistry -J-Dsun.rmi.loader.logLevel=VERBOSE
Now move all the server code (EchoServer, EchoServer_stub, and
Echo) into a server subdirectory or subfolder. Then try to start
EchoServer from its parent directory or folder, as follows:
java -cp server EchoServer
You should see an exception:
java.rmi.ServerException: RemoteException occurred in server ...
java.rmi.UnmarshalException: error unmarshalling ...
java.lang.ClassNotFoundException: EchoServer_Stub
...
This exception, received by the server, reports an error that
actually first occurred in the rmiregistry process. When the
server attempts to bind the name into the registry, the registry
must be able to load the stub class. The debug flag on the
command line causes the registry to log this process. So in the
rmiregistry console, you see something like this:
... loading class "EchoServer_Stub" from []
The [] indicates that the registry does not know where to look
for the stub. To fix this problem, the server needs to tell
clients where to find any needed classes. To do this, you can
specify a codebase on the command line to the server process:
java -cp server \
-Djava.rmi.server.codebase=file:{serverloc}/ EchoServer
Replace the codebase value (that is, file:{serverloc}) with the
URL to your server, and do not omit the trailing '/' or the space
before EchoServer.
For example, if the full URL to your server is
file:/myhome/server, enter the following command:
java -cp server \
-Djava.rmi.server.codebase=file:/myhome/server/ EchoServer
When you specify a codebase, the server annotates all outbound
objects with URL location information. Clients can use this
location information to download classes when necessary. If you
try this command, the server should function normally, that is,
you should see the following prompt on the console line:
Type EXIT to shutdown the server.
In the rmiregistry output you should also see a line proving
that the registry loaded your code from the codebase:
... loading ... from [file:/yourURL/]
Now copy the EchoClient class and the Echo class to another
subdirectory or subfolder called client. Then run EchoClient
from its parent directory or folder, passing the same
debugging flag that you did to the rmiregistry:
java -cp client -Dsun.rmi.loader.logLevel=VERBOSE EchoClient
You will get an exception telling you that the RMI class loader
is disabled.
To fix this problem, you need to use dynamic class loading.
However, to use dynamic class loading, you must install a
security manager. Without a security manager installed, servers
could easily attack you by sending malicious code that
masquerades as a remote stub. When you turn on security, you also
need a policy file that gives you the permissions necessary to do
RMI work. So save the following policy file as SimpleRMI.policy
in the client folder:
grant {
permission java.net.SocketPermission
"*:1024-", "accept, connect";
permission java.io.FilePermission
"${/}thisproject${/}server${/}-", "read";
};
The SocketPermission lets RMI use sockets on nonprivileged ports,
and the FilePermission allows classes to be dynamically
downloaded from file URLs. Make sure you change "thisproject" to
the location you are using, for example:
grant {
permission java.net.SocketPermission
"*:1024-", "accept, connect";
permission java.io.FilePermission
"${/}myhome${/}server${/}-", "read";
};
Notice the use of ${/} instead of / or \. The policy expands
${/} to the correct path or folder delimiter on your host
platform.
Now try the client with security installed, and with class load
tracing enabled:
java -cp client -Dsun.rmi.loader.logLevel=VERBOSE \
-Djava.security.policy=client/SimpleRMI.policy \
-Djava.security.manager EchoClient
This time the client works as expected. The log output shows
that the client successfully downloaded the stub from the
server folder.
... loading class "EchoServer_Stub" from [file:/yourpath/]
Echo returned Hello
This is very powerful. After the class is annotated with the
codebase, the codebase travels with the class without any extra
effort. You could pass EchoServer from machine to machine
forever, and each machine would know where to find the stub --
in fact, the machines would know where to find any class they
needed.
But there's something to watch out for, which you can observe
by doing the following. First, make sure that all console
processes are shut down. Then restart the rmiregistry, but this
time with the server classes on its class path. (The easiest way
to do this is to make sure that the class path is not set and
then run rmiregistry from the server directory or folder.) Start
the server as before, passing
in the codebase annotation. Now if you try to run the client
from the client folder, class loading fails:
java -cp client -Dsun.rmi.loader.logLevel=VERBOSE \
-Djava.security.policy=client/SimpleRMI.policy \
-Djava.security.manager EchoClient
... loading class "EchoServer_Stub" from []
java.rmi.UnmarshalException: ...
...
This can be the most baffling problem that a novice RMI
programmer faces. Everything looks normal. The codebase is set
correctly in the server, and it appears that rmiregistry loaded
the stub from the correct codebase. But somehow the codebase
annotation gets lost, and the client cannot find the stub.
In order for the stub's annotation to flow from one process to
another, intermediate processes must either (1) implicitly load
the annotated class using an RMI-created class loader or
(2) explicitly reset the codebase. If any intermediate VM finds
the stub on its class path, the normal system class loader is
used, and the annotation passed from the server is lost. The
series of events leading to this failure is:
1. The server explicitly sets codebase annotation from the
command line.
2. The server binds the stub, which picks up the annotation.
3. The registry process attempts to load the stub, and finds
it on the class path.
4. Because the stub is on the class path, the annotation is
lost.
5. The client attempts to look up the stub, but has no
codebase to find it.
You could work around this problem by resetting the codebase on
every intermediate virtual machine, or even on the end client.
This is rarely appropriate. The server is the logical owner of
the stub, and should tell clients where to find it. In order to
leave the server in charge, set the codebase on the server, and
make sure that client processes never place dynamically-loaded
classes on their class paths.
This tip demonstrates that RMI problems are much easier to
troubleshoot if you use the correct debugging flags. In this
example, the sun.rmi.loader.logLevel flag makes it easy to
determine where in the system the annotations are being lost.
Another useful flag is java.rmi.server.logCalls=true. This flag
logs all remote calls. For a more complete list of RMI debugging
flags, see
http://java.sun.com/j2se/1.3/docs/guide/rmi/faq.html#properties.
Note that the properties that begin with "java.rmi" are part of
the public specification. Properties that begin with "sun" are
subject to change or removal in future versions of the
implementation.
. . . . . . . . . . . . . . . . . . . . . . .
February 8, 2001. This issue covers:
* Piped Streams
* Using Sets
These tips were developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
You can view this issue of the Tech Tips on the Web at
http://java.sun.com/jdc/JDCTechTips/2001/tt0208.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
PIPED STREAMS
Piped streams are a mechanism in the Java(tm) I/O library to set up
a stream of data between two threads. Pipes are input/output
pairs. Data written on the output stream shows up on the input
stream at the other end of the pipe. You can think of pipes as
a buffer, with a connection at each end.
Before getting into the details of pipes, here is a simple
example of how you can use them:
import java.io.*;
class MyThread extends Thread {
private Writer out;
public MyThread(Writer out) {
this.out = out;
}
public void run() {
// write into one end of pipe
try {
BufferedWriter bw = new BufferedWriter(out);
bw.write("testing");
bw.newLine();
bw.close();
}
catch (IOException e) {
System.err.println(e);
}
}
}
public class PipeDemo1 {
public static void main(String args[]) throws IOException {
// create a pipe
PipedWriter writer = new PipedWriter();
PipedReader reader = new PipedReader(writer);
// start thread going and write into pipe
new MyThread(writer).start();
// read from other end of pipe
BufferedReader br = new BufferedReader(reader);
String str = br.readLine();
br.close();
System.out.println(str);
}
}
PipeDemo1 creates a pipe, consisting of a PipedWriter object, and
a PipedReader on the other end of the pipe. The PipedReader reads
data written by the PipedWriter.
PipeDemo1 then creates and starts an instance of MyThread. The
MyThread thread writes into one end of the pipe, then the main
thread reads the data that was written, that is:
testing
When piped streams are used to communicate between threads, the
action of the threads is coordinated. For example, if a thread
tries to read from a pipe, and no input is available, the thread
blocks, that is, it waits until input is available, preventing
other activity on the thread. The thread also blocks if it writes
into a pipe, and the pipe buffer fills up.
If the pipe's reader and writer run on the same thread, and the
pipe fills up, that thread blocks permanently. In other words, if
a thread writes into a pipe, and the pipe's buffer is full, the
thread blocks until the reader on the other end can take some of
the data from the buffer. But if the reader and writer are the
same thread, the thread is blocked, and so the reader cannot do
its job. Here's an example of a program that hangs because of
a blocked thread:
import java.io.*;
public class PipeDemo2 {
public static void main(String args[]) throws IOException {
// create a pipe
PipedWriter writer = new PipedWriter();
PipedReader reader = new PipedReader(writer);
// write into one end of pipe, writing
// more than the internal buffer size
BufferedWriter bw = new BufferedWriter(writer);
for (int i = 1; i <= 200; i++) {
bw.write("testing");
bw.newLine();
}
bw.close();
// read from the pipe, from within the
// same thread as the writer above
BufferedReader br = new BufferedReader(reader);
String str = br.readLine();
br.close();
System.out.println(str);
}
}
The internal buffer size is 1024, but you can't rely on this
number in writing your programs. PipeDemo2 overflows this buffer,
and because of this, the writing thread blocks. The program hangs
because the writer and reader run on the same thread, so the
reader never gets a chance to drain the buffer.
How can you use piped streams in practice? One example is an
application that has a producer thread that generates data, and
another thread that consumes the data. You could use a piped
stream to communicate between the threads. Here's what this looks
like:
import java.util.*;
import java.io.*;
class ProdThread extends Thread {
private static final int N = 100;
private static final int MAXLEN = 63;
private Writer out;
public ProdThread(Writer out) {
this.out = out;
}
public void run() {
Random rn = new Random(0);
char num[] = new char[MAXLEN];
try {
// write N lines of binary numbers,
// each binary number 1 - MAXLEN digits in length
BufferedWriter bw = new BufferedWriter(out);
for (int i = 1; i <= N; i++) {
int len = rn.nextInt(MAXLEN) + 1;
for (int j = 0; j < len; j++) {
num[j] = (rn.nextBoolean() ? '1' : '0');
}
bw.write(num, 0, len);
bw.newLine();
}
bw.close();
}
catch (IOException e) {
System.err.println(e);
}
}
}
class ConsThread extends Thread {
private Reader in;
public ConsThread(Reader r) {
in = r;
}
public void run() {
// read the generated test data that
// was written into the pipe above
try {
BufferedReader br = new BufferedReader(in);
String str;
while ((str = br.readLine()) != null) {
System.out.println(str);
}
br.close();
}
catch (IOException e) {
System.err.println(e);
}
}
}
public class PipeDemo3 {
public static void main(String args[]) throws IOException {
// create a pipe
PipedWriter writer = new PipedWriter();
PipedReader reader = new PipedReader(writer);
// create and start the generator and consumer threads
Thread prodthread = new ProdThread(writer);
Thread consthread = new ConsThread(reader);
prodthread.start();
consthread.start();
}
}
In this demonstration, the producer thread, ProdThread,
generates random binary numbers to be used as test data for the
consumer thread, ConsThread. The numbers are written into a pipe.
Then ConsThread reads the values from the other end of the pipe.
The first few lines of output look like this:
1011010110001111010000001001011001
001111011111101011110100101110000001010
0111111
There are other ways to produce the same results. For example,
you could have the producer thread fill up a collection object
such as an ArrayList, and pass it to the consumer. However, if
the data is very large, this could chew up a lot of memory.
Also this approach doesn't allow the threads to run
simultaneously, something that might be important if you have
multiple CPUs available. You could use temporary files, but
this technique has many of the same problems.
Or you could use a limited size data structure, and coordinate
the action of the threads so that when one thread fills the
structure, it blocks. Then the other thread empties the
structure. But if you're going to do it this way, you might as
well use piped streams, and let them take care of the details
for you.
For more information, see Section 15.4.4, Piped Streams, in
"The Java Programming Language Third Edition" by Arnold,
Gosling, and Holmes
(http://java.sun.com/docs/books/javaprog/thirdedition/).
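The examples above use the character-oriented pipe classes. The
I/O library also provides byte-oriented equivalents,
PipedInputStream and PipedOutputStream, which behave the same way.
Here's a minimal sketch (PipeBytesDemo is not from the original
tip) that round-trips a byte array through a byte pipe, writing
from a second thread and reading from the main thread:

```java
import java.io.*;

public class PipeBytesDemo {
    // writes a byte array into one end of a byte pipe from a
    // second thread, then reads it all back from the other end
    public static byte[] roundTrip(final byte[] data) throws Exception {
        final PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);
        Thread writer = new Thread() {
            public void run() {
                try {
                    out.write(data);
                    out.close();
                }
                catch (IOException e) {
                    System.err.println(e);
                }
            }
        };
        writer.start();
        // read until the writer closes its end of the pipe
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        int c;
        while ((c = in.read()) != -1) {
            buf.write(c);
        }
        in.close();
        writer.join();
        return buf.toByteArray();
    }

    public static void main(String args[]) throws Exception {
        System.out.println(new String(roundTrip("testing".getBytes())));
    }
}
```

As with the character pipes, the read blocks until data is
available, and the loop ends when the writing thread closes its
end of the pipe.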
USING SETS
Set and SortedSet are Java collection interfaces. You use them to
specify a collection that has no duplicate elements. These
interfaces are implemented using the HashSet and TreeSet classes.
Here's a simple example:
import java.util.*;
public class SetDemo1 {
public static void main(String args[]) {
boolean b;
// create a set
Set s = new HashSet();
// add strings the first time
b = s.add("string1");
System.out.println("string1 add returns " + b);
b = s.add("string2");
System.out.println("string2 add returns " + b);
b = s.add("string3");
System.out.println("string3 add returns " + b);
// try to add some duplicate strings
b = s.add("string1");
System.out.println("string1 add returns " + b);
b = s.add("string2");
System.out.println("string2 add returns " + b);
// dump out set contents
Iterator i = s.iterator();
while (i.hasNext()) {
System.out.println(i.next());
}
}
}
This example creates a set and then adds some strings to it. The
add method returns true for each element actually added, so the
output for the initial adds is:
string1 add returns true
string2 add returns true
string3 add returns true
Sets do not allow duplicate elements, so when an attempt is made
to add duplicates, the add method returns false. The output
for these attempts is:
string1 add returns false
string2 add returns false
After the elements are added, the contents of the set are
displayed using an iterator. The output is:
string1
string3
string2
This output illustrates an important fact about sets: by default,
the elements retrieved by the iterator are not in sorted order.
If you want to change that, you can use a SortedSet:
import java.util.*;
public class SetDemo2 {
public static void main(String args[]) {
// create a SortedSet implemented as a tree structure
SortedSet s = new TreeSet();
s.add("string1");
s.add("string2");
s.add("string3");
s.add("string1");
s.add("string2");
Iterator i = s.iterator();
while (i.hasNext()) {
System.out.println(i.next());
}
}
}
Using the SortedSet in SetDemo2 returns the elements in natural
order:
string1
string2
string3
You can also specify a Comparator object to order the set's
elements.
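For example, here's a sketch (not from the original tip) that
uses String.CASE_INSENSITIVE_ORDER, a Comparator supplied by the
core API, so that "string1", "STRING2", and "string3" sort
together regardless of case:

```java
import java.util.*;

public class SetDemoCmp {
    // builds a TreeSet ordered by the case-insensitive
    // Comparator that the String class provides
    public static SortedSet makeSet() {
        SortedSet s = new TreeSet(String.CASE_INSENSITIVE_ORDER);
        s.add("string1");
        s.add("STRING2");
        s.add("string3");
        return s;
    }

    public static void main(String args[]) {
        Iterator i = makeSet().iterator();
        while (i.hasNext()) {
            System.out.println(i.next());
        }
    }
}
```

With natural ordering, all the uppercase strings would sort
before the lowercase ones; the Comparator interleaves them, so
the output is string1, STRING2, string3.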
How do sets differ from lists and maps? Sets are different from
lists in that they have no duplicate elements. Unlike lists,
there is no way to manipulate set elements based on their
position in the set. For example, you can't retrieve the 17th
element of a set by random access, or insert an element before
the 59th element of the set. Sets are different from maps in
that a map contains pairs of elements (that is, key/value pairs),
with each key mapped to a value.
An important concept relating to sets is defining what
constitutes a duplicate element. Consider this example:
import java.util.*;
public class SetDemo3 {
public static void main(String args[]) {
// create a set and add two objects
// to it; the objects are distinct
// but have the same displayable string
Set s = new HashSet();
s.add("37");
s.add(new Integer(37));
Iterator i = s.iterator();
while (i.hasNext()) {
System.out.println(i.next());
}
}
}
In this example, the two elements have the same printable string
(37), but are distinct when compared using the equals method, so
both end up in the set.
This example leads to an important question. If you use a
SortedSet, where the elements must be ordered, what happens if you
add elements that have nothing in common with each other, that is,
the elements have object types that cannot be compared? Here's an
example that tries to do that:
import java.util.*;
public class SetDemo4 {
public static void main(String args[]) {
// create a SortedSet
SortedSet s = new TreeSet();
// add two objects to the set;
// the objects are not comparable
s.add("37");
s.add(new Integer(37));
Iterator i = s.iterator();
while (i.hasNext()) {
System.out.println(i.next());
}
}
}
SetDemo4 adds a string "37" and an Integer object "37" to a set.
When the program runs, the result is a ClassCastException. The
exception is thrown because an attempt is made to order the
elements of the set. A String object has no relationship to
an Integer object, so the relative order of the two objects
cannot be determined.
Because of the ordering implied by SortedSets, there are some
additional operations you can do with this type of set. Here's
a program that illustrates these operations:
import java.util.*;
public class SetDemo5 {
public static void main(String args[]) {
// create a SortedSet
SortedSet s = new TreeSet();
// add some elements to it
s.add("string1");
s.add("STRING2");
s.add("STRING3");
s.add("string4");
s.add("STRING5");
// get the first/lowest object in the set
System.out.println("first = " + s.first());
// get a subset of the set and display it
Set sub = s.subSet("A", "ZZZZZZ");
Iterator i = sub.iterator();
while (i.hasNext()) {
System.out.println(i.next());
}
}
}
This example creates a set, adds some elements to it, and
retrieves the first element of the set. Then it retrieves and
displays a subset. The subset consists of elements greater than
or equal to a minimum element, and less than a maximum element.
Here is the output:
first = STRING2
STRING2
STRING3
STRING5
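Besides subSet, SortedSet also provides headSet and tailSet
views. Here's a short sketch (SetDemoViews is not from the
original tip) using the same elements as SetDemo5:

```java
import java.util.*;

public class SetDemoViews {
    public static SortedSet makeSet() {
        SortedSet s = new TreeSet();
        s.add("STRING2");
        s.add("STRING3");
        s.add("STRING5");
        s.add("string1");
        s.add("string4");
        return s;
    }

    public static void main(String args[]) {
        SortedSet s = makeSet();
        // headSet: elements strictly less than the argument
        System.out.println(s.headSet("STRING5"));
        // tailSet: elements greater than or equal to the argument
        System.out.println(s.tailSet("string1"));
    }
}
```

Because uppercase letters sort before lowercase ones in natural
order, the headSet holds STRING2 and STRING3, and the tailSet
holds string1 and string4.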
A subset is an example of a "backing view," that is, it's not
a copy of elements in the original set, but an actual view of
the set that filters out some elements. This means that if the
original set changes, so will the subset. Here's a program that
illustrates this point:
import java.util.*;
public class SetDemo6 {
public static void main(String args[]) {
// create a sorted set
SortedSet s = new TreeSet();
s.add("string1");
s.add("STRING2");
s.add("STRING3");
s.add("string4");
s.add("STRING5");
// get a subset of the set and display it
Set sub = s.subSet("A", "ZZZZZZ");
Iterator i = sub.iterator();
while (i.hasNext()) {
System.out.println(i.next());
}
System.out.println();
// remove an element from the original set
s.remove("STRING3");
// display the subset again
i = sub.iterator();
while (i.hasNext()) {
System.out.println(i.next());
}
}
}
Here is the displayed output of the SetDemo6 program:
STRING2
STRING3
STRING5
STRING2
STRING5
Notice that the second time the subset is displayed, the
"STRING3" element is gone. That's because it was removed from
the original set that backs the subset.
A final point about sets is illustrated by the following
example of bad programming style:
import java.util.*;
class MyObject {
public String s;
public String toString() {
return s;
}
public int hashCode() {
return s.hashCode();
}
public boolean equals(Object o) {
if (!(o instanceof MyObject)) {
return false;
}
MyObject mo = (MyObject)o;
boolean b = s.equals(mo.s);
return b;
}
}
public class SetDemo7 {
public static void main(String args[]) {
Set s = new HashSet();
// create two MyObjects and
// add them to a set
MyObject obj1 = new MyObject();
MyObject obj2 = new MyObject();
obj1.s = "string1";
obj2.s = "string2";
s.add(obj1);
s.add(obj2);
// change one of the object's contents
obj2.s = "string1";
// remove both objects from the set
s.remove(obj1);
s.remove(obj2);
// dump out the contents of the set
Iterator i = s.iterator();
while (i.hasNext()) {
System.out.println(i.next());
}
}
}
A set cannot have duplicate elements. This example shows what can
happen if you violate this rule by changing an element's contents
after the fact. The example adds two MyObject objects to a set.
The objects are distinct from each other. Then one of the objects
is changed so that it is no longer distinct. Then both objects
are removed from the set, and the contents of the set are
displayed. The set should be empty, but it's not, as you can see
from the displayed result:
string1
To understand why the set is not empty, consider some internal
details of the set implementation. A HashSet is implemented using
a HashMap, with each set element as a key, and a shared dummy
object as the value inserted in the map.
If you insert two elements into a map, they are inserted at
random points in the map, based on their hash codes. If you
change the key for one of the elements, without updating the map,
the element in the map is "stranded." This means that the element
is probably inaccessible except through a sequential scan of the
map. That's why you see the anomalous behavior illustrated by the
SetDemo7 program.
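If you must change an element's contents, the usual workaround is
to remove the element while its hash code is still valid, mutate
it, and then add it back. Here's a sketch of that pattern; Tag is
a hypothetical mutable class playing the role of MyObject in
SetDemo7:

```java
import java.util.*;

// Tag is a hypothetical mutable class whose hash code depends
// on its contents, like MyObject in SetDemo7
class Tag {
    String s;
    Tag(String s) { this.s = s; }
    public int hashCode() { return s.hashCode(); }
    public boolean equals(Object o) {
        return (o instanceof Tag) && ((Tag)o).s.equals(s);
    }
    public String toString() { return s; }
}

public class SetFixDemo {
    // the safe pattern: remove the element while its hash code
    // is still valid, mutate it, then add it back
    public static void rename(Set set, Tag t, String newValue) {
        set.remove(t);   // still findable: hash unchanged so far
        t.s = newValue;
        set.add(t);      // re-inserted in the correct bucket
    }

    public static void main(String args[]) {
        Set set = new HashSet();
        Tag t1 = new Tag("string1");
        Tag t2 = new Tag("string2");
        set.add(t1);
        set.add(t2);
        rename(set, t2, "string1");
        // t2 now duplicates t1, so the re-add was refused
        // and the set holds one element
        System.out.println(set.size());
    }
}
```

Note that if the mutation makes the element equal to one already
in the set, the re-add is simply refused, preserving the
no-duplicates rule instead of stranding an entry.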
For more information about sets, see Section 16.5, Set and
SortedSet, in "The Java Programming Language Third Edition"
by Arnold, Gosling, and Holmes
(http://java.sun.com/docs/books/javaprog/thirdedition/).
. . . . . . . . . . . . . . . . . . . . . . .
January 30, 2001. This issue of the JDC Tech Tips covers
techniques for controlling access to Java(tm) packages. The
topics covered are:
* Controlling Package Access With Security Permissions
* Controlling Package Access With Sealed JAR Files
This tip was developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
This issue of the JDC Tech Tips is written by Stuart Halloway,
a Java specialist at DevelopMentor (http://www.develop.com/java).
You can view this issue of the Tech Tips on the Web at
http://java.sun.com/jdc/TechTips/2001/tt0130.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
CONTROLLING PACKAGE ACCESS WITH SECURITY PERMISSIONS
In the Java(tm) programming language, a package is an important
abstraction that supports object-oriented design. A package is
a group of classes that work together on some common task. For
example, the java.io package handles input/output, and the
java.net package handles network communications. Classes within
the same package have a special relationship that is enforced
by the language and the Java virtual machine. If you declare
a class, method, or field without an explicit access modifier,
then it has default access, and is accessible to any class in
the same package:
package com.develop.widgets;
//Both Helper and widgetCount are visible only within the
//com.develop.widgets package.
class Helper {
static int widgetCount;
}
This level of access is invaluable for implementation details
that need to be shared across multiple classes, and because of
that, cannot be marked "private." However, default access only
provides encapsulation if you control all the code that is loaded
into your package. Otherwise, developers can accidentally (or
maliciously) add their own classes to your package, and gain
access to all the default-access classes, methods, and fields
inside of the package.
The Java platform provides several mechanisms to control package
access. Two of these are built-in permissions relating to package
access, and JAR sealing, which is supported by the
java.net.URLClassLoader class.
The Java security architecture defines two permission prefixes
that control the use of packages: the RuntimePermissions that
begin with "accessClassInPackage" and "defineClassInPackage."
The accessClassInPackage permissions control whether a class can
be loaded by a class loader, possibly by delegation to some other
loader. These permissions are checked in the loadClass method of
participating class loaders. The defineClassInPackage permissions
allow a class to be defined by a specific class loader, and would
probably be checked in the findClass method of a class loader.
Depending on the class loaders in use in your application, you
might need to specify both permissions. For example, if you want
to be able to access the classes in the java.net and
java.io packages from your application, you might specify a policy
file entry like this:
//correct, but unnecessary for reasons you will see momentarily
grant {
permission java.lang.RuntimePermission "accessClassInPackage.java.io";
permission java.lang.RuntimePermission "accessClassInPackage.java.net";
permission java.lang.RuntimePermission "defineClassInPackage.java.io";
permission java.lang.RuntimePermission "defineClassInPackage.java.net";
};
In each case, the suffix of the permission name is the name of
the package. However, something is obviously missing from this
story. Most Java policy files do not have any entries like these,
yet Java applications are able to access these and other packages.
The reason for this is that packages are not secured by default.
To secure them, you must edit the java.security file in your
${JAVA_HOME}/jre/lib/security folder:
#excerpted from java.security. Each property takes a comma delimited
#list of package prefixes. Security checks apply only to packages
#that begin with an exact string match of one of these prefixes.
package.access=sun.
#package.definition=
As you can see, the only packages that are access protected are
those prefixed with "sun." To see the protection in action, use
the following test class, which simply verifies that it can load
a class from a URL:
//place in file TestAccess.java
import java.net.*;
import java.security.*;
public class TestAccess {
public static void testCL(ClassLoader ldr, String name) {
try {
Class cl = ldr.loadClass(name);
System.out.println(ldr + " loaded " + cl);
}
catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
try {
if (args.length != 2) {
System.out.println("usage TestAccess URL class");
System.exit(-1);
}
URL[] urls = new URL[] {new URL(args[0])};
URLClassLoader ucl = URLClassLoader.newInstance(urls);
testCL(ucl, args[1]);
}
catch (Throwable t) {
t.printStackTrace();
}
}
}
//place in file Test.policy
grant {
permission java.io.FilePermission "<<ALL FILES>>", "read";
permission java.lang.RuntimePermission "createClassLoader";
};
Try running this class with the following command line (that is,
all on one line):
java -Djava.security.manager -Djava.security.policy=Test.policy
TestAccess file:someRandomURL/ sun.secret.Class
The java.security.manager flag activates the default security
manager. The java.security.policy property is set to Test.policy;
the permissions in Test.policy allow your code to create a class
loader and read from the local file system.
If you run the class, you should see the following exception:
java.security.AccessControlException: access denied
(java.lang.RuntimePermission accessClassInPackage.sun.secret)
It does not matter what URL you use, because the security check
against the package name occurs before the class loader tries to
access the URL. To give your code permission to access the
sun.secret package, add the following entry to your Test.policy
file:
permission java.lang.RuntimePermission "accessClassInPackage.sun.secret";
If you retry the class with the same command line as before, you
should see the more mundane error:
java.lang.ClassNotFoundException: sun.secret.Class
Of course, had there really been a sun.secret.Class, you would
now have permission to access it.
You can use the same mechanism to protect your own package names
by adding the package prefixes to either the package.access or
package.definition properties in the java.security file. You can
then limit access to trusted code by adding entries to the policy
file.
There are a few caveats to this technique. First, you must make
sure to use class loaders that participate in this part of the
security architecture. The common class loaders' implementations
of package security are summarized below:
----------------------------------------------------------
| Class Loader | Access Checks | Definition Checks |
| ---------------------------------------------------------|
| bootstrap | No | No |
| extensions | No | No |
| system | Yes | No |
| URLClassLoader | Maybe* | No |
----------------------------------------------------------
As you can see, none of the class loaders provided with the JDK
check for permission to define a class. If you want to add this
capability, you need to use a custom class loader that overrides
findClass. Also, beware the "Maybe" entry in the table for
URLClassLoader. If you construct a URLClassLoader directly, it
does not check if the package access is legal. But, if you use
the static method URLClassLoader.newInstance to create a
URLClassLoader, you receive a subclass of URLClassLoader that
does check package access.
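You can observe this distinction at runtime. The following sketch
(not from the original tip) checks that newInstance hands back an
instance of some subclass of URLClassLoader rather than the plain
class; the subclass type itself is an undocumented implementation
detail:

```java
import java.net.*;

public class NewInstanceDemo {
    // URLClassLoader.newInstance returns an instance of a
    // security-checking subclass, not a plain URLClassLoader
    public static boolean returnsSubclass() throws Exception {
        URL[] urls = new URL[] { new URL("file:someRandomURL/") };
        URLClassLoader ucl = URLClassLoader.newInstance(urls);
        return (ucl instanceof URLClassLoader)
            && ucl.getClass() != URLClassLoader.class;
    }

    public static void main(String args[]) throws Exception {
        System.out.println(returnsSubclass());
    }
}
```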
The other caveat has to do with editing the java.security file.
The package name passed to the security manager does not have
a trailing "." and the string matching is completely literal.
As a result, you must be careful in deciding whether to use
a trailing period in the policy file. For example, the default
entry:
package.access=sun.
protects "sun.misc" and "sun.tools" but it would not protect the
sun package because there is no trailing period to match. But
if the entry were edited to read:
package.access=sun
it would protect the sun package, but also other packages with
names such as "sundance", "sung", and "sundry".
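The matching rule is an ordinary string-prefix test. This toy
sketch (a simplification, not the actual security manager code)
shows why the trailing period matters:

```java
public class PrefixMatchDemo {
    // mimics the literal prefix match applied to the
    // package.access list (a simplification, not the real code)
    public static boolean restricted(String pkg, String prefix) {
        return pkg.startsWith(prefix);
    }

    public static void main(String args[]) {
        System.out.println(restricted("sun.misc", "sun."));  // true
        System.out.println(restricted("sun", "sun."));       // false
        System.out.println(restricted("sundry", "sun"));     // true
    }
}
```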
You can read more about RuntimePermissions at
http://java.sun.com/products/jdk/1.2/docs/guide/security/permissions.html#RuntimePermission
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
CONTROLLING PACKAGE ACCESS WITH SEALED JAR FILES
The security mechanisms discussed in the tip "Controlling Package
Access With Security Permissions" approach the problem at
a fairly low level. Most developers do not need this level of
control, and simply want to guarantee that all classes in
a package come from the same code source. This common case is
supported by JAR "sealing." You can seal a JAR file by specifying
a true value for the sealed attribute, that is:
Sealed: true
Note that the default value of the sealed attribute is false.
After a class loader loads a class from a sealed JAR file,
classes in the same package can only be loaded from that JAR
file. You can override sealing on a per-package basis with
additional attributes listed after the package name. For
example, you can have a main section as follows (terminated
by blank line):
Sealed: false
And follow the main section with sub-sections that have per-entry
attributes:
Name: com/develop/impl/
Sealed: true
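At run time you can ask whether a loaded class's package is sealed through the java.lang.Package class; the package queried below is just an illustration:

```java
public class SealCheck {
    public static void main(String args[]) {
        // look up the Package object for a loaded class
        Package p = String.class.getPackage();
        System.out.println(p.getName() + " sealed: " + p.isSealed());
    }
}
```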
To see sealing in action, move the TestAccess class that was
introduced in the tip "Controlling Package Access With Security
Permissions" to a package named sealme:
//place in file sealme/TestAccess.java
package sealme;
//repeat TestAccess.java contents from above
Then, create a manifest that seals all packages. The file
should have the following lines and end with a blank line:
Manifest-Version: 1.0
Sealed: true
Create a sealed jar file with the jar tool:
jar cvmf manifest sealme.jar sealme/TestAccess.class
Now, create another class in a separate location from TestAccess.
Place it, say, in dynclass/sealme:
package sealme;
public class LoadMe {
}
Try running sealme.TestAccess from the jar file, using it to
dynamically load sealme.LoadMe:
java -cp sealme.jar sealme.TestAccess file:dynclass/ sealme.LoadMe
Because the jar file is sealed, you will not be able to load
sealme.LoadMe from an alternate location. You should see an
exception like this:
java.lang.SecurityException: sealing violation
at java.net.URLClassLoader.defineClass(URLClassLoader.java:234)
(etc.)
Notice that package sealing works regardless of whether you have
installed a security manager. This is in marked contrast to the
RuntimePermissions discussed in the tip "Controlling Package Access
With Security Permissions." Where the permission-based solution is
flexible but difficult to configure, the package sealing approach is
simple, blunt, and easy to use. Unless you have a compelling reason
to do otherwise, you should consider sealed JAR files to be the
default mechanism for delivering Java code.
It is interesting to note that the "java.*" packages are not
protected by either of the schemes discussed above. The
documentation is confusing on this point. For example, the
documentation for the defineClassInPackage permission states that
"[granting] this is dangerous because malicious code with this
permission may define rogue classes in trusted packages like
java.security or java.lang, for example."
This is doubly wrong. First, the defineClassInPackage permission
is not checked by any class loader supplied with the JDK. Second,
as of JDK 1.3, the "java" packages are protected by a line that is
hard coded into the ClassLoader class:
//from java.lang.ClassLoader.defineClass
if ((name != null) && name.startsWith("java.")) {
    throw new SecurityException("Prohibited package name: " +
        name.substring(0, name.lastIndexOf('.')));
}
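You can observe this check by asking a custom class loader to define a class in a "java." package. The class name and empty byte array below are placeholders; the name check fires before the bytes are parsed:

```java
public class ProhibitedDemo {
    static class EvilLoader extends ClassLoader {
        Class define() {
            // the hard-coded name check throws before the (empty)
            // class file bytes are ever examined
            return defineClass("java.Evil", new byte[0], 0, 0);
        }
    }
    public static void main(String[] args) {
        try {
            new EvilLoader().define();
        } catch (SecurityException se) {
            System.out.println(se); // Prohibited package name: java
        }
    }
}
```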
Obviously Sun takes the protection of the core API packages very
seriously! You should take similar care when deploying your own
Java packages.
JAR sealing is part of the Java extensions mechanism. For more
information, see
http://java.sun.com/products/jdk/1.2/docs/guide/extensions/index.html
. . . . . . . . . . . . . . . . . . . . . . .
January 9, 2001. This issue covers:
* Using the java.lang.Character Class
* Handling Uncaught Exceptions
These tips were developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
You can view this issue of the Tech Tips on the Web at
http://java.sun.com/jdc/JDCTechTips/2001/tt0109.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING THE JAVA.LANG.CHARACTER CLASS
java.lang.Character is a wrapper class for the primitive type
char. Like other wrappers such as Integer, it is used to
represent primitive values in object form, so that collection
classes that only know about Object references can manipulate
char values. The Character class is also used to group together
methods and constants used in handling Unicode characters.
This tip looks at some of the ways you can use Character.
The first example shows how the class is used as a wrapper:
import java.util.*;
public class CharDemo1 {
    public static void main(String args[]) {
        List list = new ArrayList();
        list.add(new Character('a'));
        list.add(new Character('b'));
        list.add(new Character('c'));
        for (int i = 0; i < list.size(); i++) {
            System.out.println(list.get(i));
        }
    }
}
In this example, three Character objects representing the letters
a, b, and c are added to an ArrayList. Then the contents of the
list are displayed.
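Retrieving an element from the list gives you back an Object reference; a cast plus the charValue method recovers the primitive. A small sketch:

```java
import java.util.*;
public class CharValueDemo {
    public static void main(String args[]) {
        List list = new ArrayList();
        list.add(new Character('a'));
        // collections hand back Object references, so cast first
        Character c = (Character)list.get(0);
        char ch = c.charValue(); // unwrap to the primitive char
        System.out.println(ch);
    }
}
```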
The Character class contains a lot of "isX" methods, such as
"isDigit". You might think that these methods aren't really
necessary, because it's simpler to say something like:
if (c >= '0' && c <= '9')
...
if you want to test whether a character is a digit. This code
actually works in some contexts, but it has a big problem. It
doesn't account for the fact that Java uses the Unicode
character set rather than the ASCII character set. For example,
if you run this program:
public class CharDemo2 {
    public static void main(String args[]) {
        int dig_count = 0;
        int def_count = 0;
        for (int i = 0; i <= 0xffff; i++) {
            if (Character.isDigit((char)i)) {
                dig_count++;
            }
            if (Character.isDefined((char)i)) {
                def_count++;
            }
        }
        System.out.println("number of digits = " + dig_count);
        System.out.println("number of defined = " + def_count);
    }
}
it reports that the Unicode character set contains 159 characters
that are classified as digits.
This example also illustrates another interesting point: not all
possible Unicode character values have meaning. The program
reports that Character.isDefined returns true for 47400 of 65536
characters.
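For instance, the Arabic-Indic digit zero (\u0660) is classified as a digit in Unicode but falls outside the ASCII-style range check:

```java
public class DigitCheck {
    public static void main(String args[]) {
        char c = '\u0660'; // ARABIC-INDIC DIGIT ZERO
        System.out.println(Character.isDigit(c)); // true
        System.out.println(c >= '0' && c <= '9'); // false
    }
}
```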
Another place where the Character class is useful is in
converting from upper case characters to lower case characters.
Here's an example:
public class CharDemo3 {
    public static void main(String args[]) {
        char cupper = 'A';
        char clower;
        // convert to lower case using the ASCII convention
        clower = (char)(cupper + 0x20);
        System.out.println("cupper #1 = " + cupper);
        System.out.println("clower #1 = " + clower);
        System.out.println();
        // convert to lower case using Character.toLowerCase()
        clower = Character.toLowerCase(cupper);
        System.out.println("cupper #2 = " + cupper);
        System.out.println("clower #2 = " + clower);
    }
}
If you've used the ASCII character set, it's common to convert to
lower case by adding 0x20 (decimal 32) to an upper case letter.
This approach works in the demo program, but again fails to take
into account the Unicode character set. The key obstacle is that
in Unicode, upper and lower case equivalents aren't guaranteed to
be exactly 0x20 values apart. So in this situation, it's
preferable to use the toLowerCase method of the Character class.
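One character that breaks the add-0x20 rule is the Latin capital letter Y with diaeresis (\u0178), whose lower case form is \u00ff:

```java
public class LowerCheck {
    public static void main(String args[]) {
        char cupper = '\u0178'; // LATIN CAPITAL LETTER Y WITH DIAERESIS
        char clower = Character.toLowerCase(cupper);
        // the correct mapping is \u00ff; adding 0x20 gives \u0198
        System.out.println(Integer.toHexString(clower));
        System.out.println((char)(cupper + 0x20) == clower); // false
    }
}
```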
The Character class also contains several methods for converting
between character and integer values. These are used, for example,
in Integer.parseInt, to convert number strings in a specified base
into integers, as is the case in the following statement:
Integer.parseInt("-ff", 16) == -255
Here's a program that illustrates these methods:
public class CharDemo4 {
    public static void main(String args[]) {
        // return the numeric value of 'z' considered as
        // a digit in base 36
        int dig = Character.digit('z', 36);
        System.out.println(dig);
        // return the character value for the
        // specified digit in base 36
        char cdig = Character.forDigit(dig, 36);
        System.out.println(cdig);
        // return the numeric value of \u217c
        int rn50 = Character.getNumericValue('\u217c');
        System.out.println(rn50);
    }
}
Character.digit returns the numeric value of a character
considered as a digit in a given radix. So, for example, in base
36, digits have the values 0-9 and a-z, and thus 'z' has the
value 35.
Character.forDigit reverses the process; the appropriate digit
as a character for the value 35 in base 36 is 'z'.
Character.getNumericValue returns the numeric value of
a character digit, using the value specified in an internal table
called the Unicode Attribute Table. For example, the Unicode
character \u217c is the small Roman numeral fifty, which has
a numeric value of 50.
The Unicode Attribute Table is also used to specify the type of
a Unicode character. Types are categories such as punctuation,
currency symbols, letters, and so on. Here's a simple program that
displays the hexadecimal values of all the characters classified
as currency symbols:
public class CharDemo5 {
    public static void main(String args[]) {
        for (int i = 0; i <= 0xffff; i++) {
            if (Character.getType((char)i) ==
                    Character.CURRENCY_SYMBOL) {
                System.out.println(Integer.toHexString(i));
            }
        }
    }
}
There are 27 such symbols. The first one listed, 0x24, corresponds
to the familiar '$' character.
A final example of how you can use the Character class has to do
with Unicode character blocks. These blocks are used to group
related characters. Some examples are BASIC_LATIN, ARABIC,
GEORGIAN, ARROWS, and KANBUN. Here's a demo program that prints
all character values in the GREEK character block:
public class CharDemo6 {
    public static void main(String args[]) {
        for (int i = 0; i <= 0xffff; i++) {
            if (Character.UnicodeBlock.of((char)i) ==
                    Character.UnicodeBlock.GREEK) {
                System.out.println(Integer.toHexString(i));
            }
        }
    }
}
To learn more about java.lang.Character, see section 11.1.3
Character, and Table 7 Unicode Character Blocks in Appendix B
Useful Tables in "The Java Programming Language Third Edition"
by Arnold, Gosling, and Holmes
(http://java.sun.com/docs/books/javaprog/thirdedition/).
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
HANDLING UNCAUGHT EXCEPTIONS
If you've done much programming in the Java(tm) programming
language, you've probably encountered applications that terminate
abnormally with an uncaught exception. Here's a program that does
just that:
public class ExcDemo1 {
    public static void main(String args[]) {
        int vec[] = new int[10];
        vec[10] = 37;
    }
}
In this example, the program terminates abnormally due to an
uncaught exception. The program throws an exception because of an
illegal array access to vec[10] (vec has valid array indexes
of 0-to-9).
Before examining some techniques for handling uncaught exceptions,
let's look at the rules for how the Java(tm) Virtual Machine*
terminates a program. The first rule is that an uncaught
exception terminates the thread in which it occurs. The second
rule is that a program terminates when there are no more user
threads available. Here's an example:
class MyThread extends Thread {
    public void run() {
        try {
            Thread.sleep(5 * 1000);
        }
        catch (InterruptedException e) {
            System.err.println(e);
        }
        System.out.println("MyThread thread still alive");
    }
}
public class ExcDemo2 {
    public static void main(String args[]) {
        new MyThread().start();
        int vec[] = new int[10];
        vec[10] = 37;
    }
}
The main thread shuts down almost immediately, due to an unhandled
exception. But there's an instance of MyThread that remains active
for approximately five seconds, and completes normally.
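The "no more user threads" rule applies only to non-daemon threads; marking a thread as a daemon means it will not keep the VM alive. A small sketch:

```java
class SleepyThread extends Thread {
    public void run() {
        try {
            Thread.sleep(5 * 1000);
        }
        catch (InterruptedException e) {
            System.err.println(e);
        }
        System.out.println("SleepyThread still alive"); // never printed
    }
}
public class DaemonDemo {
    public static void main(String args[]) {
        Thread t = new SleepyThread();
        t.setDaemon(true); // must be set before start()
        t.start();
        // main returns, no user threads remain, so the VM
        // exits without waiting for the daemon thread
    }
}
```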
So how do you handle uncaught exceptions? The first approach is
very simple -- you put a try...catch block around the top-level
method that invokes the application:
public class ExcDemo3 {
    static void app() {
        int vec[] = new int[10];
        vec[10] = 37;
    }
    public static void main(String args[]) {
        try {
            app();
        }
        catch (Exception e) {
            System.err.println("uncaught exception: " + e);
        }
    }
}
All the exceptions that an application typically tries to catch
are subclasses of java.lang.Exception, and so are caught by this
technique. This excludes errors like OutOfMemoryError,
which are descendants of java.lang.Error. If you really want to
catch everything (not necessarily a good idea), you need to use
a "catch (Throwable e)" clause.
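A sketch of the Throwable variant, using a simulated error for illustration:

```java
public class ExcDemoAll {
    static void app() {
        // simulate a condition outside the Exception hierarchy
        throw new OutOfMemoryError("simulated");
    }
    public static void main(String args[]) {
        try {
            app();
        }
        catch (Exception e) {
            // not reached: an Error is not an Exception
            System.err.println("uncaught exception: " + e);
        }
        catch (Throwable t) {
            System.err.println("uncaught throwable: " + t);
        }
    }
}
```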
What if you want to extend this technique to multiple threads?
An obvious approach is to say:
class MyThread extends Thread {
    public void run() {
        int vec[] = new int[10];
        vec[10] = 37;
    }
}
public class ExcDemo4 {
    public static void main(String args[]) {
        try {
            new MyThread().start();
        }
        catch (Exception e) {
            System.err.println("uncaught exception: " + e);
        }
    }
}
Unfortunately, this approach doesn't actually work -- it only
catches exceptions thrown by the start method itself, namely
IllegalThreadStateException, which is thrown when the thread has
previously been started.
So it's necessary to get a little more sophisticated, and
override the uncaughtException method in the ThreadGroup class.
A ThreadGroup object represents a group of threads. There is
a method in ThreadGroup that is called when a thread within the
group is about to die because of an uncaught exception.
Here's what the code looks like:
class MyThreadGroup extends ThreadGroup {
    public MyThreadGroup(String s) {
        super(s);
    }
    public void uncaughtException(Thread t, Throwable e) {
        System.err.println("uncaught exception: " + e);
        //super.uncaughtException(t, e);
    }
}
class MyThread extends Thread {
    public MyThread(ThreadGroup tg, String n) {
        super(tg, n);
    }
    public void run() {
        int vec[] = new int[10];
        vec[10] = 37;
    }
}
public class ExcDemo5 {
    public static void main(String args[]) {
        ThreadGroup tg = new MyThreadGroup("mygroup");
        Thread t = new MyThread(tg, "mythread");
        t.start();
    }
}
The code example creates a subclass of ThreadGroup, and
overrides the uncaughtException method. This overridden method is
called for a dying thread; the thread object and exception are
passed as parameters to the method.
By default, uncaughtException invokes the uncaughtException method
on the thread group's parent group object. If there is no such
group, the exception's printStackTrace method is called to display
a stack trace. You can see what the default behavior looks like
by commenting out the "System.err.println" line and uncommenting
the "super.uncaughtException(t, e)" line.
Further reading: sections 10.12 Thread and Exceptions, and
18.3 Shutdown in "The Java Programming Language Third Edition"
by Arnold, Gosling, and Holmes
(http://java.sun.com/docs/books/javaprog/thirdedition/).
. . . . . . . . . . . . . . . . . . . . . . .
December 22, 2000. This issue covers techniques for tracking and
controlling memory allocation in the Java HotSpot(tm) Virtual
Machine*. The topics covered are:
* A Memory Testbed Application
* Controlling Your Memory Manager
This tip was developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
This issue of the JDC Tech Tips is written by Stuart Halloway,
a Java specialist at DevelopMentor (http://www.develop.com/java).
You can view this issue of the Tech Tips on the Web at
http://java.sun.com/jdc/TechTips/2000/tt1222.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
A MEMORY TESTBED APPLICATION
Memory management can have a dramatic effect on performance, and
most virtual machines expose a set of configuration options that
you can tweak for the best possible performance of your
application on a particular platform. To investigate this, let's
examine a simple application for allocating and unreferencing
blocks of memory. This application will be used in the second tip
("Controlling Your Memory Manager") to demonstrate some of the
configuration options in the VM for memory management.
import java.io.*;
import java.util.*;
public class MemWorkout {
    private static final int K = 1024;
    private int maxStep;
    private LinkedList blobs = new LinkedList();
    private long totalAllocs;
    private long totalUnrefs;
    private long unrefs;
    public String toString() {
        return "MemWorkout allocs=" + totalAllocs +
            " unrefs=" + totalUnrefs;
    }
    private static class Blob {
        public final int size;
        private final byte[] data;
        public Blob(int size) {
            this.size = size;
            data = new byte[size];
        }
    }
    private void grow(long goal) {
        long totalGrowth = 0;
        long allocs = 0;
        while (totalGrowth < goal) {
            int grow = (int)(Math.random() * maxStep);
            blobs.add(new Blob(grow));
            allocs++;
            totalGrowth += grow;
        }
        totalAllocs += allocs;
        System.out.println("" + allocs + " allocs, " +
            totalGrowth + " bytes");
    }
    private void shrink(long goal) {
        long totalShrink = 0;
        unrefs = 0;
        try {
            while (totalShrink < goal) {
                totalShrink += shrinkNext();
            }
        } catch (NoSuchElementException nsee) {
            System.out.println("all items removed");
        }
        totalUnrefs += unrefs;
        System.out.println("" + unrefs + " unreferenced objs, " +
            totalShrink + " bytes");
    }
    private long shrinkNext() {
        //choice of FIFO/LIFO very important!
        Blob b = (Blob) blobs.removeFirst();
        //Blob b = (Blob) blobs.removeLast();
        unrefs++;
        return b.size;
    }
    public MemWorkout(int maxStep) {
        this.maxStep = maxStep;
    }
    public static void main(String[] args) {
        if (args.length < 1) {
            throw new Error("usage MemWorkout maxStepKB");
        }
        int maxStep = Integer.parseInt(args[0]) * K;
        if (maxStep < K) {
            throw new Error("maxStep must be at least 1KB");
        }
        MemWorkout mw = new MemWorkout(maxStep);
        try {
            while (true) {
                BufferedReader br = new BufferedReader(
                    new InputStreamReader(System.in));
                logMemStats();
                System.out.println("{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits");
                String s = br.readLine();
                if (s.equals("GC")) {
                    System.gc();
                    System.runFinalization();
                    continue;
                }
                long alloc = Integer.parseInt(s) * 1024 * 1024;
                if (alloc > 0) {
                    mw.grow(alloc);
                } else {
                    mw.shrink(-alloc);
                }
            }
        } catch (NumberFormatException ne) {
        } catch (Throwable t) {
            t.printStackTrace();
        }
        System.out.println(mw);
    }
    public static void logMemStats() {
        Runtime rt = Runtime.getRuntime();
        System.out.println("total mem: " + (rt.totalMemory()/K) +
            "K free mem: " + (rt.freeMemory()/K) + "K");
    }
}
To run MemWorkout, specify it with a number argument, like this:
java MemWorkout 5
In response, you should see something like this:
total mem: 1984K free mem: 1790K
{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits
The first line of output indicates the total available memory and
the total amount of free memory. The second line is a prompt.
You can respond to the prompt in one of four ways. If you enter
a positive number, MemWorkout loads the system with approximately
that many megabytes by adding "Blob" objects to a blobs list.
The size of a Blob is a random number of bytes between 0 and the
value of the initial argument you specified to MemWorkout,
interpreted in kilobytes. So for the value 5, each new Blob added
to the list is at most 5 kilobytes.
If you enter a negative number in response to the prompt,
MemWorkout attempts to unload the system of that amount of
megabytes by removing Blobs from the list.
You can also enter GC to run System.gc() and
System.runFinalization(), or EXIT to exit the application.
For example, a MemWorkout session that adds 50MB of load, drops
25MB, and then collects garbage would look something like this:
java MemWorkout 5
total mem: 1984K free mem: 1790K
{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits
50
20617 allocs, 52430544 bytes
total mem: 64320K free mem: 11854K
{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits
-25
10312 unreferenced objs, 26216866 bytes
total mem: 64320K free mem: 11828K
{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits
GC
total mem: 65280K free mem: 38976K
{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits
EXIT
MemWorkout allocs=20617 unrefs=10312
This session exercises the Java HotSpot(tm) Client VM, which is
part of Java 2 SDK, Standard Edition, v 1.3. The session
demonstrates several interesting things about the HotSpot VM.
First, notice that total memory increases immediately to meet
the 50MB allocation. Second, notice that free memory is not
immediately reclaimed when 25MB worth of objects are removed.
Instead the free memory is reclaimed when the garbage collector
is requested through System.gc(). The configuration options
described in the next tip ("Controlling Your Memory Manager")
give you several choices for controlling these behaviors.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
CONTROLLING YOUR MEMORY MANAGER
Garbage collection performance can be very important to the
overall performance of an application written in the Java
programming language. The most primitive memory management
schemes use a "stop-the-world" approach, where all other
activity in the VM must halt while all objects in the system are
scanned. This can cause a noticeable pause in program execution.
Even when delays do not come in large, user-irritating chunks,
the overall time spent collecting garbage can still impact
performance. This tip uses the MemWorkout class to demonstrate
the following memory management flags. You can use these flags
to tune the garbage collection performance of the HotSpot VM
in the Java 2 SDK v 1.3:
Flags Purpose
--------------------- -------------------------------------
-Xms and -Xmx Control system memory usage
-verbose:gc Trace garbage collection activity
-XX:NewSize Control the nursery starting size
-Xincgc and -Xnoincgc Turn on, or off, incremental garbage
collection
***Warning*** The -X flags are non-standard options, and are
subject to change in future releases of the Java 2 SDK. Also,
the -XX flag is officially unsupported, and subject to change
without notice.
Perhaps the most crucial setting in memory management is the
maximum total memory allowed to the VM. If you set this lower than
the maximum memory needed by the VM, your application will fail
with an OutOfMemoryError exception. More subtly, if you set the
maximum memory too close to your application's memory usage,
performance might degrade significantly. (Although there are
many different garbage collection algorithms, most perform poorly
when memory is almost full.) The HotSpot VM default for initial
memory allocation is 2MB. By default, HotSpot gradually increases
memory allocation up to 64MB; any memory request above 64MB fails.
You can control the initial memory setting with the -Xms flag, and
control the maximum setting with the -Xmx flag. Try these flags
out in a MemWorkout session. (The MemWorkout class is described in
the previous tip, "A Memory Testbed Application.") Start MemWorkout
as follows:
java MemWorkout 5
Then respond to the MemWorkout prompt with the number 32; this
means allocate 32MB. After MemWorkout responds, enter 32
again, for another 32MB allocation. (To keep the text short, the
rest of the MemWorkout sessions in this tip will list only the
command line entries. So, for example, a MemWorkout session with
two entries of 32 will be abbreviated to "32,32".) Running
MemWorkout this way should generate an OutOfMemoryError exception
because the two 32MB allocations, plus the overhead of the
application and VM, are easily greater than 64MB.
To fix this problem, try MemWorkout again, but this time specify
the -Xmx flag as follows:
java -Xmx80m MemWorkout 5
Then run the session as before, that is, "32,32". The 80m argument
indicates that the VM can use a maximum of 80MB. This time
MemWorkout should succeed.
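In SDK releases after the v 1.3 used for this tip, you can also query the memory ceiling from code through Runtime.maxMemory, which reports the value controlled by -Xmx. A minimal sketch:

```java
public class MaxMem {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory reports the -Xmx ceiling; totalMemory reports
        // what the VM has actually claimed from the system so far
        System.out.println("max mem:   " + (rt.maxMemory() / 1024) + "K");
        System.out.println("total mem: " + (rt.totalMemory() / 1024) + "K");
    }
}
```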
If you know that the memory footprint of your application remains
almost constant for the life of the application, specify a
starting memory allocation that is higher than the starting
default. You specify the starting allocation with the -Xms flag.
This saves the startup overhead of working up from 2MB. The
following command specifies both a starting and maximum allocation
of 80MB. This guarantees that the virtual machine will grab 80MB
of system memory at startup and keep it for the lifetime of the
application:
java -Xms80m -Xmx80m MemWorkout 5
The -verbose:gc flag causes the VM to log garbage collection
activity. Instead of guessing when and how your program interacts
with the garbage collector, you can use this flag to track it. Try
running MemWorkout with the -verbose:gc flag, as follows:
java -verbose:gc MemWorkout 5
Then run the session as before, that is, "32,32". You should see
trace output from the garbage collector similar to this:
total mem: 1984K free mem: 1790K
{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits
32
[GC 508K->432K(1984K), 0.0128943 secs]
[GC 940K->939K(1984K), 0.0061460 secs]
[GC 1450K->1450K(1984K), 0.0057276 secs]
[GC 1959K->1959K(2496K), 0.0056435 secs]
[Full GC 2471K->2471K(3772K), 0.0276593 secs]
etc.
You'll probably see many indications of garbage collection,
indicated by [GC ...]. You might wonder why so many garbage
collections are done. The answer is that before the virtual
machine asks for more memory from the system, it tries to reclaim
some of the memory it already has. It does this by running the
garbage collector. If you run the same application with 80MB
preallocated, as in the following example, some of the calls to
the garbage collector should disappear:
java -verbose:gc -Xms80m -Xmx80m MemWorkout 5
total mem: 81664K free mem: 81470K
{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits
32
[GC 2046K->1970K(81664K), 0.0240181 secs]
etc...
[GC 32669K->32669K(81664K), 0.0220730 secs]
13200 allocs, 33558101 bytes
This time you should see fewer garbage collections. Also, you
should not see any full garbage collections (indicated by
[Full GC...]). Full garbage collections tend to be the most
expensive in terms of performance.
Intuitively, garbage collection should run when memory is low.
Because the MemWorkout application above starts with 80MB and
only allocates 32MB, the VM is never low on memory. So why are
there still some calls to the garbage collector? The answer is
that the HotSpot VM collector is generational. Generational
collectors take advantage of the reasonable assumption that
young objects are likely to die soon (think local variables).
So instead of collecting all of memory, generational collectors
divide memory into two or more generations. When the youngest
generation, or "nursery," is nearly full, a partial garbage
collection is done to reclaim some of the young objects that are
no longer reachable. This partial garbage collection is usually
much faster than a full garbage collection; it postpones the need
for a full gc. Generational gc can dramatically reduce both the
duration and frequency of full gc pauses.
The initial size of the object nursery is configurable; the
documentation often refers to it as the "eden space." On
a SPARCstation, the new generation size defaults to 2.125MB;
on an Intel processor, it defaults to 640k. Try to configure
MemWorkout so that it runs without any need for garbage
collection. To do that, make the nursery large enough so that
the entire application usage fits easily in the nursery. The
session should look something like this:
java -verbose:gc -Xms80m -Xmx80m -XX:NewSize=60m MemWorkout 5
total mem: 75776K free mem: 75582K
{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits
32
13118 allocs, 33555324 bytes
total mem: 75776K free mem: 42073K
{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits
The -XX:NewSize flag sets the initial nursery size to 60MB. This
accomplishes the objective; the lack of gc trace output indicates
that the nursery never needed collection. Of course, it is
unlikely that you would ever set the nursery so large. Like every
good thing in life, the size of the nursery involves a painful
tradeoff. If you make the nursery too small, objects get moved
into older generations too quickly, clogging the older generations
with dead objects. This situation forces a full gc earlier than
would otherwise be needed. But a large nursery causes longer
pauses, eventually approaching the length of a full gc. There is no
magic formula. Use -verbose:gc to observe the memory behavior of
your application, and then make small, incremental changes to the
nursery size and measure the results. Remember too that HotSpot
is adaptive and will dynamically adjust the nursery size in
long-running applications.
In addition to being generational, the HotSpot VM can also run in
incremental mode. Incremental gc divides the entire set of objects
into smaller sets, and then processes these sets incrementally.
Like generational gc, incremental gc aims to make pause times
smaller by avoiding long pauses to trace most or all objects.
However, incremental gc's advantages accrue regardless of the age
of the object. The disadvantage of incremental gc is that even
though collection is divided into smaller pauses, the overall cost
of garbage collection can be substantially higher, causing
throughput to decrease. This tradeoff is worthwhile for
applications that must make response time guarantees, such as
applications that have user interfaces. Incremental gc defaults to
"off." You can turn it on with the -Xincgc flag. To see incremental
gc in action, try a MemWorkout session that begins by adding 32MB,
and then adds and unreferences several fairly small chunks:
java -verbose:gc -Xms80m -Xmx80m -Xincgc MemWorkout 5
total mem: 640K free mem: 446K
{intMB} allocates, {-intMB} deallocates, GC collects garbage, EXIT exits
32
[GC 511K->447K(960K), 0.0086260 secs]
[GC 959K->964K(1536K), 0.0075505 secs]
(many more GC pauses!)
Notice that the initial 32MB allocation of system memory causes
a large number of incremental gc pauses. However, while there are
more pauses, they are an order of magnitude faster than the other
gc pauses you probably have seen. The pauses should be down in the
millisecond range instead of the tens of milliseconds. Also notice
that occasionally, unreferencing will appear to cause an
incremental gc. This happens because unlike the other forms of
garbage collection, incremental gc does not run primarily when
memory is full (or when a segment of memory such as the nursery
is almost full). Instead, incremental gc tries to run in the
background when it sees an opportunity.
Tuning the memory management of the HotSpot VM is a complex task.
HotSpot learns over time, and adjusts its behavior to get better
performance for your specific application. This is an excellent
feature, but it also makes it more difficult to evaluate the
output from simple benchmarks such as the MemWorkout class
presented in this tip. To gain a real understanding of HotSpot's
interactions with your code, you need to run tests that
approximate your application's behavior, and run them for long
periods of time.
This tip has shown just a sampling of the memory settings
available for the HotSpot VM. For further information about
HotSpot VM settings, see the Java(tm) HotSpot VM Options page
(http://java.sun.com/j2se/docs/VMOptions.html). Also see the
HotSpot FAQ (http://java.sun.com/docs/hotspot/PerformanceFAQ.html).
The book "Java(tm) Platform Performance: Strategies and Tactics"
by Steve Wilson and Jeff Kesselman
(http://java.sun.com/jdc/Books/performance/) includes two
appendixes that are valuable in learning more about memory
management. One appendix gives an overview of garbage collection;
the second introduces the HotSpot VM.
Richard Jones and Rafael Lins's "Garbage Collection" page
(http://www.cs.ukc.ac.uk/people/staff/rej/gc.html)
provides a good survey of gc algorithms, and a gc bibliography.
. . . . . . . . . . . . . . . . . . . . . . .
November 28, 2000. This issue covers:
* Using Privileged Scopes
* Debugging Class Loading
Underlying both tips is the use of the boot class path. In the
first tip, you use the boot class path and privileged scopes to
add a simple logging feature to the Java(tm) security
architecture. In the second tip, you use the boot class path
and other techniques to debug class loading.
This tip was developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
This issue of the JDC Tech Tips is written by Stuart Halloway,
a Java specialist at DevelopMentor (http://www.develop.com/java).
You can view this issue of the Tech Tips on the Web at
http://java.sun.com/jdc/TechTips/2000/tt1128.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING PRIVILEGED SCOPES
The core Java security architecture is based on granting
permissions to code based on where the code is located. In the
Java(tm) 2 SDK, Standard Edition, starting with version 1.2,
these permissions are configured by editing a text file called
the policy file. For example, if you wanted to grant permission
to read and write files in a "temp" subdirectory off the root,
you might use a policy file like this:
//file my.policy
grant {
    permission java.io.FilePermission "${/}temp${/}-", "read,write";
};
A grant block can begin by specifying the location of the code
that should be granted the permissions. If there were such a
specification in the example, it would follow the grant keyword
as a codeBase clause. But there is no location specified, so the
grant applies to all code. Inside the braces is a list of
permissions. In the example, the
FilePermission syntax gives permission to read and write files in
the temp subdirectory and all its subdirectories. The special ${/}
syntax will be replaced by the path separator on the local platform.
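You can also check the wildcard semantics programmatically with
FilePermission.implies. Here is a minimal sketch (it assumes a
Unix-style file system, where "/" is the separator, so the literal
paths below are only illustrative):

```java
import java.io.FilePermission;

public class PermDemo {
    // the "-" wildcard covers the directory and all its subdirectories
    public static boolean coversSubdir() {
        FilePermission granted =
            new FilePermission("/temp/-", "read,write");
        FilePermission wanted =
            new FilePermission("/temp/sub/foo.txt", "read");
        return granted.implies(wanted);
    }
    public static void main(String[] args) {
        System.out.println(PermDemo.coversSubdir());
    }
}
```

This is the same implication test the security machinery performs when
it decides whether a grant covers a requested file access.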
To verify that this policy file works correctly, compile and
execute the following Java class:
import java.io.*;
public class TestPolicy {
    public static void main(String [] args) {
        tryToRead("/temp/foo.txt");
        tryToRead("/qwyjibo/foo.txt");
    }
    public static void tryToRead(String fileName) {
        try {
            FileInputStream fis = new FileInputStream(fileName);
        }
        catch (SecurityException se) {
            System.out.println("Didn't have permission to read " + fileName);
            se.printStackTrace();
            return;
        }
        catch (Exception e) {
            //don't really care if the file was there, just checking
            //if we would have been allowed to read it
        }
        System.out.println("Granted permission to read " + fileName);
    }
}
To execute the program with security active and referencing your
policy file, you will need to use the command line
java -Djava.security.manager -Djava.security.policy=my.policy TestPolicy
If everything works as expected, you should see output similar to
this:
Granted permission to read /temp/foo.txt
Didn't have permission to read /qwyjibo/foo.txt
java.security.AccessControlException: access denied (java.io.FilePermission qwyjibo/foo.txt read)
at java.security.AccessControlContext.checkPermission(Unknown Source)
at java.security.AccessController.checkPermission(Unknown Source)
at java.lang.SecurityManager.checkPermission(Unknown Source)
at java.lang.SecurityManager.checkRead(Unknown Source)
at java.io.FileInputStream.<init>(Unknown Source)
at TestPolicy.tryToRead(TestPolicy.java:10)
at TestPolicy.main(TestPolicy.java:6)
Permission checks work by checking the entire call stack. Every
class on the call stack must have the requisite permission, or
the security check fails. This is based on the assumption that
the security manager has no special knowledge of your code, and
has to assume that any untrusted code, anywhere on the stack,
might be a threat. In the exception output above, all of the
classes that begin with "java" are part of the core API and pass
all security checks. The only problem is the TestPolicy class,
which does not have permission to access files in the "/qwyjibo"
directory.
Now, imagine that you wanted to keep an audit log of all failed
file reads. To do this, you might extend the normal
SecurityManager as follows:
//ATTENTION: compile this into a subdirectory named 'boot'
import java.io.*;
import java.security.*;
public class LoggingSM extends SecurityManager {
    public void checkRead(String name) {
        try {
            super.checkRead(name);
        }
        catch (SecurityException se) {
            log(name, se);
            throw se;
        }
    }
    public void log(String name, Exception se) {
        try {
            FileOutputStream fos = new FileOutputStream("security.log");
            PrintStream ps = new PrintStream(fos);
            ps.println("failed attempt to read " + name);
            se.printStackTrace(ps);
        }
        catch (Exception e) {
            System.out.println("uh-oh, the log is busted somehow");
            e.printStackTrace();
        }
    }
}
This subclass of SecurityManager calls the default
implementation's checkRead method, but catches the exception and
logs it before throwing it back to the client.
As the comment in LoggingSM states, compile the class into a
subdirectory named "boot." After you compile the class, you can use
it as your SecurityManager by specifying its name on the command
line like this:
java -Xbootclasspath/a:boot/ -Djava.security.manager=LoggingSM\
-Djava.security.policy=my.policy TestPolicy
The addition of the -Xbootclasspath/a: flag appends the "boot"
subdirectory to the bootstrap class path. This causes the LoggingSM
class to be loaded by the bootstrap class loader, so that the
class will not fail security checks. When you run this command,
you would like to see the failed file read appear in the
security.log file. Unfortunately, this doesn't happen. Instead,
you get a console report that notes the expected security failure,
and indicates that the log failed to work. You should see something
similar to this:
uh-oh, the log is busted somehow
java.security.AccessControlException: access denied (java.io.FilePermission security.log write)
at java.security.AccessControlContext.checkPermission(Unknown Source)
at java.security.AccessController.checkPermission(Unknown Source)
at java.lang.SecurityManager.checkPermission(Unknown Source)
at java.lang.SecurityManager.checkWrite(Unknown Source)
at java.io.FileOutputStream.<init>(Unknown Source)
at java.io.FileOutputStream.<init>(Unknown Source)
at LoggingSM.log(LoggingSM.java:16)
at LoggingSM.checkRead(LoggingSM.java:10)
at java.io.FileInputStream.<init>(Unknown Source)
at TestPolicy.tryToRead(TestPolicy.java:9)
at TestPolicy.main(TestPolicy.java:5)
The call stack clearly illustrates the problem. Because the
untrusted TestPolicy class was on the call stack, the attempt to
open the FileInputStream throws a SecurityException. But, when
the LoggingSM class attempts to write to the log, the mischievous
TestPolicy class is still on the stack. So, the SecurityManager
blindly rejects the attempt to write the log. What this situation
calls for is some way for LoggingSM to insist "I know what I am
doing when I open the log file, so there is no need to check the
call stack any further."
The AccessController.doPrivileged() method neatly solves the
problem. When you place a block of code inside a doPrivileged
method, you are asserting that, based on your knowledge of the
code, you are confident that it is safe for the operation to
proceed without any additional security checks. Note that you are
not turning off security entirely -- the code that calls the
AccessController must still pass its own security check. (This is
why you added LoggingSM to the boot class path.) To fix the log so
that it uses a privileged block, replace the log method as
follows:
private void log(String name, Exception se) {
    try {
        FileOutputStream fos = (FileOutputStream)
            AccessController.doPrivileged(new PrivilegedExceptionAction() {
                public Object run() throws PrivilegedActionException {
                    try {
                        return new FileOutputStream("security.log");
                    } catch (IOException ioe) {
                        throw new PrivilegedActionException(ioe);
                    }
                }
            });
        PrintStream ps = new PrintStream(fos);
        ps.println("failed attempt to read " + name);
        se.printStackTrace(ps);
    }
    catch (Exception e) {
        System.out.println("uh-oh, the log is busted somehow");
        e.printStackTrace();
    }
}
The doPrivileged method executes the run method of the anonymous
inner subclass of PrivilegedExceptionAction. When a security check
is necessary, it stops walking back up the call stack after it hits
this block of code.
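For operations that do not throw checked exceptions, there is a
simpler variant, PrivilegedAction, which avoids the exception-wrapping
seen above. Here is a sketch (the class name and the property read are
only illustrative):

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

public class PropReader {
    // read a system property inside a privileged block; only this
    // frame's permissions are checked, not those of any caller
    public static String readVersion() {
        return (String) AccessController.doPrivileged(new PrivilegedAction() {
            public Object run() {
                return System.getProperty("java.version");
            }
        });
    }
    public static void main(String[] args) {
        System.out.println(PropReader.readVersion());
    }
}
```

Without a security manager installed this simply returns the property;
with one installed, the privileged block stops the stack walk at this
frame, just as in the logging example.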
Recompile LoggingSM into the "boot" subdirectory.
Now you can run the application with the command line
java -Djava.security.manager=LoggingSM -Djava.security.policy=my.policy\
-Xbootclasspath/a:boot/ TestPolicy
This time, the LoggingSM will be able to write to the file
system, so after the program runs the security.log file will
have a correct report of security failures that occurred. If you
have trouble getting the example to work, try adding the
"-Djava.security.debug=all" flag on the command line. This flag
produces exhaustive trace output of the security system.
For more information about privileged scopes, see "API for
Privileged Blocks" at
http://java.sun.com/j2se/1.3/docs/guide/security/doprivileged.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
DEBUGGING CLASS LOADING
The October 31, 2000 edition of the Tech Tips included a quick
overview of the ClassLoader architecture. Unfortunately, even
after you have a good understanding of the architecture it is
easy to get lost when debugging a complex system with multiple
class loaders. This tip will help you troubleshoot common
class loading problems. Begin by compiling these classes:
public class LoadMe {
    static {
        System.out.println("Yahoo! I got loaded");
    }
}

import java.net.*;
public class Loader {
    public static void main(String [] args) throws Exception {
        URLClassLoader uclMars =
            new URLClassLoader(new URL[]{new URL("file:mars/")});
        URLClassLoader uclVenus =
            new URLClassLoader(new URL[]{new URL("file:venus/")});
        Class mars = Class.forName("LoadMe", true, uclMars);
        Class venus = Class.forName("LoadMe", true, uclVenus);
        System.out.println("(Venus version == mars version) == " +
            (mars == venus));
    }
}
Before running the Loader class, create three copies of the
compiled LoadMe.class file: one in the same directory as Loader,
one in a "mars" subdirectory, and one in a "venus" subdirectory.
The objective of this test is to load two different versions
of the same class. (Before reading further, see if you can
determine why this isn't going to work.) When you run the Loader
class, you will see the following output:
Yahoo! I got loaded
(Venus version == mars version) == true
Contrary to the plan, the mars and venus versions of the class
are the same. A first step to debugging this is to use the
-verbose:class flag on the command line:
java -verbose:class Loader
[Opened E:\Program Files\JavaSoft\JRE\1.3\lib\rt.jar]
[Opened E:\Program Files\JavaSoft\JRE\1.3\lib\i18n.jar]
[Opened E:\Program Files\JavaSoft\JRE\1.3\lib\sunrsasign.jar]
[Loaded java.lang.Object from E:\Program Files\JavaSoft\JRE\1.3\lib\rt.jar]
...
[Loaded Loader]
[Loaded LoadMe]
You should see several screens of output listing all the classes
as the VM loads them. For classes loaded by the bootstrap class
loader, this output will show you exactly what JAR file the class
came from. This information alone should quickly resolve many
class loader problems. For example, it would help you identify
the fact that you are accidentally running with your JAVA_HOME
environment variable pointing to another installed copy of the
Java(tm) platform. Unfortunately, the output does not contain
enough information to solve the LoadMe problem. Although the
output clearly shows that only one copy of the LoadMe class was
loaded, it does not show where the class came from.
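One way to recover this missing information at run time is to ask a
class for its ProtectionDomain, whose CodeSource records where the
class was loaded from. A sketch (classes loaded by the bootstrap
loader typically report no CodeSource, so this is most useful for
application classes):

```java
import java.security.CodeSource;

public class WhereFrom {
    // query a class's origin through its ProtectionDomain
    public static CodeSource source(Class cls) {
        return cls.getProtectionDomain().getCodeSource();
    }
    public static void main(String[] args) {
        // an application class reports the URL of its class path entry
        System.out.println(source(WhereFrom.class).getLocation());
    }
}
```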
To get even more information, you can install a version of
URLClassLoader that logs every class. In order to do this, you
need to recompile java.net.URLClassLoader, and then order the VM
to use your "hacked" version. (Use a hacked version of a core API
class only for debugging and for exploring the VM.)
Here is a replacement of URLClassLoader with logging added:
//extract java.net.URLClassLoader from src.jar in your JDK directory
//to a "boot" subdirectory. Insert the following method and recompile
protected Class loadClass(String name, boolean resolve)
    throws ClassNotFoundException
{
    Class cls = null;
    try {
        cls = super.loadClass(name, resolve);
        return cls;
    }
    finally {
        System.out.print("Class " + name);
        if (cls == null) {
            System.out.println(" could not be loaded by " + this);
        } else {
            ClassLoader cl = cls.getClassLoader();
            if (cl == this) {
                System.out.println(" loaded by " + cl);
            } else {
                System.out.println(" requested by " + this +
                    ", loaded by " + cl);
            }
        }
    }
}
Notice the comment in the URLClassLoader replacement. First
extract java.net.URLClassLoader from src.jar in your JDK directory
to a "boot" subdirectory. Insert into it the loadClass method.
Then recompile URLClassLoader.
Notice that the logging method is explicit about class loader
delegation. If one class loader is asked for a class, but its
parent class loader returns the class first, the output reports
both class loaders.
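You can inspect the delegation chain directly by walking getParent()
upward from any class loader; the bootstrap loader appears as null at
the top. A minimal sketch:

```java
public class ParentChain {
    // count the loaders from this class's loader up to the bootstrap
    // loader (which is represented by null and not counted)
    public static int chainLength() {
        int n = 0;
        ClassLoader cl = ParentChain.class.getClassLoader();
        while (cl != null) {
            n++;
            cl = cl.getParent();
        }
        return n;
    }
    public static void main(String[] args) {
        System.out.println(ParentChain.chainLength());
    }
}
```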
Now, you can use the "prepend" version of the bootclasspath
flag to force this version of URLClassLoader to be loaded instead
of the normal one:
java -Xbootclasspath/p:boot/ Loader
If you search the console output for the string "LoadMe" you
should find something like this:
Class LoadMe loaded by sun.misc.Launcher$AppClassLoader@404536
Class LoadMe requested by java.net.URLClassLoader@5d87b2, \
loaded by sun.misc.Launcher$AppClassLoader@404536
This output immediately identifies the problem. The LoadMe class
is not loaded by the URLClassLoader because it is already visible
on the CLASSPATH; it is represented here by a member class of
sun.misc.Launcher. To fix the bug, remove the copy of the LoadMe
class from the main project directory.
This is only one example of how you can use a custom version of
a core API class to aid debugging. You can use the boot class path
anywhere you need to inject debugging code into the core API.
But you need to understand exactly what you are doing -- a
defective version of a core API class can compromise the entire
VM. Also, the license forbids shipping a modified core class.
As mentioned earlier, you should use this technique only to
debug applications and explore the VM, never to ship code to
a customer.
For more on using the bootclasspath, see the white paper
"Using the BootClasspath" by Ted Neward at
http://www.javageeks.com/Papers/BootClasspath/
. . . . . . . . . . . . . . . . . . . . . . .
November 7, 2000. This issue covers:
* Using Random Numbers for Testing and Simulation
* Collection Utilities
These tips were developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
You can view this issue of the Tech Tips on the Web at
http://developer.java.sun.com/developer/TechTips/2000/tt1107.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING RANDOM NUMBERS FOR TESTING AND SIMULATION
The Java(tm) standard library provides a random number class,
java.util.Random. This tip examines some aspects of using random
numbers in Java programming. The tip starts with some fundamentals
of using random numbers, and then presents a longer example at the
end.
Random numbers, as found in programming languages, are really
"pseudo" random numbers. They're not random in the same sense as
physical phenomena such as thermal noise or background radiation.
However it's interesting to note that truly random number
generators, ones that are hardware-based, are starting to appear
on the market. Though software-generated random numbers are not
really random, it's possible to generate random numbers in such
a way that important statistical tests like chi square and serial
correlation are satisfied.
The Random class uses a random number generator of the form:
nextrand = nextrand * a + b;
where a and b are carefully chosen constants. As defined by
D. H. Lehmer and described by Donald E. Knuth in "The Art of
Computer Programming, Volume 2: Seminumerical Algorithms,"
section 3.2.1, this is a "linear congruential" random number
generator. The low-order bits of random numbers generated this
way tend not to be random, so internal calculation is done using
48 bits. But a Random method such as Random.nextInt uses only
the upper 32 bits of the current 48-bit random value.
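The javadoc for java.util.Random documents this generator precisely,
so it can be reproduced. The sketch below mirrors the documented
constants, the seed scrambling, and the 48-to-32 bit truncation, and
tracks Random.nextInt value for value:

```java
import java.util.Random;

public class LcgDemo {
    // constants from the java.util.Random specification
    private static final long MULT = 0x5DEECE66DL;
    private static final long ADD = 0xBL;
    private static final long MASK = (1L << 48) - 1;
    private long seed;

    public LcgDemo(long seed) {
        // Random scrambles the initial seed the same way
        this.seed = (seed ^ MULT) & MASK;
    }
    public int nextInt() {
        seed = (seed * MULT + ADD) & MASK;
        return (int) (seed >>> 16);  // upper 32 of the 48 bits
    }
    // compare 1000 values against the real Random with the same seed
    public static boolean matchesRandom() {
        Random rn = new Random(42);
        LcgDemo lcg = new LcgDemo(42);
        for (int i = 0; i < 1000; i++) {
            if (rn.nextInt() != lcg.nextInt()) {
                return false;
            }
        }
        return true;
    }
    public static void main(String[] args) {
        System.out.println(LcgDemo.matchesRandom());
    }
}
```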
A sequence of random values that is generated is deterministic.
This means that from a given starting point (a "seed"), the
sequence of values returned is predictable. When you set up
a random number generator, you can say:
Random rn = new Random(1234);
if you want to specify the seed (here, it's 1234), or:
Random rn = new Random();
if you want the generator to be seeded from the current time of
day (using System.currentTimeMillis). The first approach produces
a predictable sequence, and so this tip uses "Random(0)" in the
demo programs below.
The first random number program is a simple one that prints out
10 "heads" or "tails" values:
import java.util.Random;
public class RandDemo1 {
    public static void main(String args[]) {
        Random rn = new Random(0);
        for (int i = 1; i <= 10; i++) {
            System.out.println(rn.nextBoolean() ?
                "heads" : "tails");
        }
    }
}
The nextBoolean method in RandDemo1 is implemented internally by
generating a 48-bit random value, and then checking whether the
high bit is 1 or 0.
The next example is slightly more complex:
import java.util.Random;
class RandUtils {
    private static Random rn = new Random(0);
    private RandUtils() {}
    // get random number in range, lo <= x <= hi
    public static int getRange(int lo, int hi) {
        // error checking
        if (lo > hi) {
            throw new IllegalArgumentException("lo > hi");
        }
        // handle special case
        if (lo == Integer.MIN_VALUE &&
            hi == Integer.MAX_VALUE) {
            return rn.nextInt();
        }
        else {
            return rn.nextInt(hi - lo + 1) + lo;
        }
    }
    // return true perc % of the time
    public static boolean getPerc(int perc) {
        // error checking
        if (perc < 0 || perc > 100) {
            throw new IllegalArgumentException("bad perc");
        }
        return perc >= getRange(1, 100);
    }
}
public class RandDemo2 {
    public static void main(String args[]) {
        int accum[] = new int[10];
        // generate random numbers in a range and tally them
        for (int i = 1; i <= 10000; i++) {
            accum[RandUtils.getRange(0, accum.length - 1)]++;
        }
        // display results
        for (int i = 0; i < accum.length; i++) {
            System.out.println(i + " " + accum[i]);
        }
    }
}
In this example, RandUtils is a utility class that implements
a couple of methods: getRange and getPerc. The getRange method
returns a random number in a specified range. The method is based
on Random.nextInt, which returns a random number between 0
(inclusive) and the specified argument (exclusive). What inclusive
and exclusive mean here is that if you call Random.nextInt as
follows:
Random rn = new Random();
int n = rn.nextInt(10);
n will have a value, 0 <= n < 10. In other words, 0 can be one of
the returned values, but not 10.
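A quick empirical check of these bounds (a sketch; the seed and the
iteration count are arbitrary):

```java
import java.util.Random;

public class BoundsCheck {
    // confirm that nextInt(10) always falls in [0, 10)
    public static boolean inRange() {
        Random rn = new Random(0);
        for (int i = 0; i < 100000; i++) {
            int n = rn.nextInt(10);
            if (n < 0 || n >= 10) {
                return false;
            }
        }
        return true;
    }
    public static void main(String[] args) {
        System.out.println(BoundsCheck.inRange());
    }
}
```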
The other method is getPerc; it returns true the specified
percentage of the time. For example, you can say:
if (RandUtils.getPerc(75)) {
// block of code
}
and the block of code will be executed 75% of the time, on average.
You'll see a use for this method in the next example.
When you run the RandDemo2 program, you should get the following
result:
0 990
1 1011
2 952
3 1045
4 998
5 1005
6 1021
7 1009
8 1005
9 964
Note that the tally for each number in the range should be about
1000. The results in this example vary slightly from the expected
value. This is normal. If you want to check whether the variation
is statistically significant, use a chi square test. If you do,
you should find that the results observed here are well within
those expected from random fluctuations.
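The chi square statistic itself is simple to compute: for each bin,
square the difference between the observed and expected counts, divide
by the expected count, and sum. A sketch applied to the tallies shown
above:

```java
public class ChiSquare {
    // chi square: sum over bins of (observed - expected)^2 / expected
    public static double statistic(int[] observed, double expected) {
        double chi2 = 0.0;
        for (int i = 0; i < observed.length; i++) {
            double d = observed[i] - expected;
            chi2 += d * d / expected;
        }
        return chi2;
    }
    public static void main(String[] args) {
        int[] tallies = {990, 1011, 952, 1045, 998,
                         1005, 1021, 1009, 1005, 964};
        // about 6.42, far below the 16.9 critical value for
        // 9 degrees of freedom at the 5% significance level
        System.out.println(statistic(tallies, 1000.0));
    }
}
```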
The final example is more complicated. Suppose that you're testing
some software, and one of the inputs to the software is calendar
dates, like this:
September 25, 1956
You'd like to generate random dates in this form, with some of the
dates being legal, and some illegal (such as "January 32, 1989").
How can you do this? One way is to use random numbers. Here's an
example:
import java.util.Random;
class RandUtils {
    private static Random rn = new Random(0);
    private RandUtils() {}
    // get random number in range, lo <= x <= hi
    public static int getRange(int lo, int hi) {
        // error checking
        if (lo > hi) {
            throw new IllegalArgumentException("lo > hi");
        }
        // handle special case
        if (lo == Integer.MIN_VALUE &&
            hi == Integer.MAX_VALUE) {
            return rn.nextInt();
        }
        else {
            return rn.nextInt(hi - lo + 1) + lo;
        }
    }
    // return true perc % of the time
    public static boolean getPerc(int perc) {
        // error checking
        if (perc < 0 || perc > 100) {
            throw new IllegalArgumentException("bad perc");
        }
        return perc >= getRange(1, 100);
    }
}
class GenDate {
    // names of months
    private static final String months[] = {
        "January", "February", "March", "April",
        "May", "June", "July", "August",
        "September", "October", "November", "December"
    };
    // days in month
    private static final int days_in_month[] = {
        31, 28, 31, 30,
        31, 30, 31, 31,
        30, 31, 30, 31
    };
    // return true if leap year
    private static boolean isLeapYear(int year) {
        if (year % 4 != 0) {
            return false;
        }
        if (year % 400 == 0) {
            return true;
        }
        return (year % 100 != 0);
    }
    // get the number of days in a given month
    private static int getDaysInMonth(int month, int year) {
        int m = days_in_month[month - 1];
        if (month == 2 && isLeapYear(year)) {
            m++;
        }
        return m;
    }
    // generate a random string
    private static String getRandString() {
        switch (RandUtils.getRange(1, 4)) {
            // empty string
            case 1: {
                return "";
            }
            // random integer
            case 2: {
                return Integer.toString(
                    RandUtils.getRange(-100000, 100000));
            }
            // random characters
            case 3: {
                StringBuffer sb = new StringBuffer();
                int n = RandUtils.getRange(1, 10);
                for (int i = 1; i <= n; i++) {
                    char c = (char)RandUtils.getRange(32, 127);
                    sb.append(c);
                }
                return sb.toString();
            }
            // random number of spaces
            case 4: {
                StringBuffer sb = new StringBuffer();
                int n = RandUtils.getRange(1, 10);
                for (int i = 1; i <= n; i++) {
                    sb.append(' ');
                }
                return sb.toString();
            }
        }
        // shouldn't get here
        throw new Error();
    }
    // this class has only static methods, so
    // can't create instances of the class
    private GenDate() {}
    public static String getRandDate() {
        StringBuffer sb = new StringBuffer();
        // generate year, month, day
        int year = RandUtils.getRange(1500, 2100);
        int month = RandUtils.getRange(1, 12);
        int day = RandUtils.getRange(1,
            getDaysInMonth(month, year));
        // 50% of the time, return a valid date
        if (RandUtils.getPerc(50)) {
            sb.append(months[month - 1]);
            sb.append(" ");
            sb.append(day);
            sb.append(", ");
            sb.append(year);
        }
        else {
            // generate a month or random string
            if (RandUtils.getPerc(75)) {
                sb.append(months[month - 1]);
            }
            else {
                sb.append(getRandString());
            }
            // generate single space or random string
            if (RandUtils.getPerc(75)) {
                sb.append(" ");
            }
            else {
                sb.append(getRandString());
            }
            // generate day of month or random number
            if (RandUtils.getPerc(75)) {
                sb.append(day);
            }
            else {
                sb.append(RandUtils.getRange(-100, 100));
            }
            // generate , or random string
            if (RandUtils.getPerc(75)) {
                sb.append(", ");
            }
            else {
                sb.append(getRandString());
            }
            // generate year or random string
            if (RandUtils.getPerc(75)) {
                sb.append(year);
            }
            else {
                sb.append(getRandString());
            }
        }
        return sb.toString();
    }
}
public class RandDemo3 {
    public static void main(String args[]) {
        for (int i = 1; i <= 15; i++) {
            System.out.println(GenDate.getRandDate());
        }
    }
}
The output of the program is:
May 21, 1778
June -83, 2006
September 51575
14, M%r
September 26, 1614
October 17, 1910
May 16, 1818
August 27, 1646
November 19, 2055
June 12, 1797
June 13, 1585
August 2, 1998
October 17,
September 14, 1545
June339628,
This technique is quite powerful and useful. Typically you start
with a description of what constitutes legal input, and then
systematically go through and generate "nearly correct" input, but
with some illegal variations thrown in, driven by random numbers.
A similar technique can be used for doing simulation.
To learn more about using random numbers, see Section 17.3 Random
in "The Java Programming Language, Third Edition," by Arnold,
Gosling, and Holmes
(http://java.sun.com/docs/books/javaprog/thirdedition/);
and Chapter 3 in Volume 2 of "The Art of Computer Programming"
by Knuth.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
COLLECTION UTILITIES
Collection classes like ArrayList and HashMap are used heavily in
Java programming. Associated with these classes are what might be
called utility methods. These methods add functionality to the
collection classes. This tip looks at some of these utilities.
The first utility method is one that provides a synchronization
wrapper on top of an existing collection. A wrapper is an
alternate view of a collection. You can still access and modify
the original collection, but the wrapped collection provides some
other desirable property.
Here's a demonstration of using synchronization wrappers:
import java.util.*;
public class UtilDemo1 {
    public static void main(String args[]) {
        Object obj = new Object();
        // create a list
        List list = new ArrayList();
        // put a wrapper on top of it
        List synclist = Collections.synchronizedList(list);
        // add some objects to the list
        long start = System.currentTimeMillis();
        for (int i = 1; i <= 1000000; i++) {
            synclist.add(obj);
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(elapsed);
    }
}
By default, collection classes such as ArrayList are not
thread-safe (unlike the older Vector class). A thread-safe
implementation does more for you, but costs more in return. If
you have multiple threads sharing a single collection, then you
need to worry about synchronization, that is, you need to be
aware of potential problems and deal with them.
In the example above, the collection is wrapped, and so, method
calls such as add will be synchronized. The synchronization is done
by first obtaining a lock on the wrapper (synclist). If you really
want to thwart synchronization, you can still access the list
directly using "list" instead of "synclist". However, this is
probably not a good idea.
Another way of tackling the same problem of adding objects to
a list looks like this:
import java.util.*;
public class UtilDemo2 {
    public static void main(String args[]) {
        Object obj = new Object();
        // create a list
        List list = new ArrayList();
        // add some objects to it
        long start = System.currentTimeMillis();
        synchronized (list) {
            for (int i = 1; i <= 1000000; i++) {
                list.add(obj);
            }
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(elapsed);
    }
}
In this example, no synchronization wrapper is used, but the list
is locked while objects are added to it. This demo runs about 25%
faster than the previous one, but at the expense of keeping the
list locked throughout the update operation.
The Collections class also has methods that return unmodifiable
(as opposed to synchronized) wrappers. One of these methods is
Collections.unmodifiableList; you can use it to create a read-only
view of a list. The list can still be modified, but not through
the wrapper interface. This is especially useful if you want to
pass a list to some other function, but you want to prevent the
other function from modifying the list. To do this, you simply
use a lightweight wrapper to make the list read only.
Here's an example that uses Collections.unmodifiableList:
import java.util.*;
public class UtilDemo3 {
    public static void main(String args[]) {
        // create a list and add some items to it
        List stringlist = new ArrayList();
        stringlist.add("alpha");
        stringlist.add("beta");
        stringlist.add("gamma");
        // create an unmodifiable view of the list
        List rolist = Collections.unmodifiableList(stringlist);
        // add to the original list (works OK)
        stringlist.add("delta");
        // add through the read-only view (gives an exception)
        rolist.add("delta");
    }
}
This example program creates a list and adds some items to it. It
then creates an unmodifiable view of the list. When you run the
program, you'll see that an additional item can be added to the
original list. However the program throws an exception when it
attempts to add an item to the read-only view.
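The exception in question is UnsupportedOperationException, which the
unmodifiable wrappers throw on any mutation attempt. A short sketch
that catches it:

```java
import java.util.*;

public class RoCheck {
    // verify that the wrapper rejects mutation with
    // UnsupportedOperationException
    public static boolean rejectsAdd() {
        List rolist = Collections.unmodifiableList(new ArrayList());
        try {
            rolist.add("delta");
            return false;  // should not get here
        }
        catch (UnsupportedOperationException e) {
            return true;
        }
    }
    public static void main(String[] args) {
        System.out.println(RoCheck.rejectsAdd());
    }
}
```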
Another kind of operation provided by the utility methods is
min/max. Here's an example using min:
import java.util.*;
public class UtilDemo4 {
    public static void main(String args[]) {
        // create a list and add some objects to it
        List list = new ArrayList();
        list.add("alpha");
        list.add("Beta");
        list.add("gamma");
        list.add("Delta");
        // compute the minimum element, case sensitive
        String str = (String)Collections.min(list);
        System.out.println(str);
        // compute the minimum element, case insensitive
        str = (String)Collections.min(list,
            String.CASE_INSENSITIVE_ORDER);
        System.out.println(str);
    }
}
This program computes the minimum value of a set of strings, using
the natural ordering of strings (see java.lang.Comparable). Then
it computes the minimum value using an implementation of the
java.util.Comparator interface. In this example, a special
comparator String.CASE_INSENSITIVE_ORDER is used. A comparator
allows you to specify a particular ordering of elements. The output
of the program is:
Beta
alpha
You can use the shuffle utility method to randomly shuffle the
elements of a list. For example, this program reads a text file and
then displays its lines in random order:
import java.io.*;
import java.util.*;
public class UtilDemo5 {
    public static void main(String args[]) throws IOException {
        // check command line argument
        if (args.length != 1) {
            System.err.println("missing input file");
            System.exit(1);
        }
        // open file
        FileReader fr = new FileReader(args[0]);
        BufferedReader br = new BufferedReader(fr);
        // read lines from file
        List list = new ArrayList();
        String ln;
        while ((ln = br.readLine()) != null) {
            list.add(ln);
        }
        br.close();
        // shuffle the lines
        Collections.shuffle(list);
        // print the result
        Iterator it = list.iterator();
        while (it.hasNext()) {
            System.out.println((String)it.next());
        }
    }
}
For input like:
1
2
3
4
5
output might be:
3
2
1
5
4
A program like this one is useful in generating test data.
A final example shows how to do binary searching in a list:
import java.util.*;
public class UtilDemo6 {
    public static void main(String args[]) {
        // create list and add elements to it
        List list = new ArrayList();
        list.add("alpha");
        list.add("Beta");
        list.add("Delta");
        list.add("gamma");
        // do the search
        int i = Collections.binarySearch(list, "chi",
            String.CASE_INSENSITIVE_ORDER);
        i = -(i + 1);
        // display the result
        System.out.println("insertion point = " + i);
    }
}
The list is searched (case insensitive) for an occurrence of the
string "chi", which is not found. When a key is not found, the
return value from binarySearch is -(i + 1), where i is the
appropriate insertion point to maintain the list in proper order.
When run, the UtilDemo6 program prints:
insertion point = 2
In other words, "chi" should be inserted at location 2, just
before "Delta".
The collection utilities also contain methods for sorting lists,
reversing the order of lists, filling, and copying.
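Two of these, Collections.sort and Collections.reverse, can be
combined to produce a descending ordering; a minimal sketch:

```java
import java.util.*;

public class MoreUtils {
    // sort into natural (ascending) order, then reverse in place
    public static List sortedDescending(List list) {
        Collections.sort(list);
        Collections.reverse(list);
        return list;
    }
    public static void main(String[] args) {
        List list = new ArrayList();
        list.add("gamma");
        list.add("alpha");
        list.add("beta");
        // descending order: gamma, beta, alpha
        System.out.println(sortedDescending(list));
    }
}
```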
To learn more about collection utilities, see section 16.8
Wrapped Collections and the Collections Class in "The Java
Programming Language, Third Edition" by Arnold, Gosling, and
Holmes (http://java.sun.com/docs/books/javaprog/thirdedition/).
. . . . . . . . . . . . . . . . . . . . . . .
October 31, 2000. This issue is about class loaders. When
a Java program doesn't load successfully, developers usually
suspect the class path. In reality, the class path is a special
case of a powerful class loading architecture built around the
java.lang.ClassLoader class. This tip covers:
* Class loaders as a namespace mechanism
* Relating class loaders to the class path
* Using class loaders for hot deployment
These tips were developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
This issue of the JDC Tech Tips is written by Stuart Halloway,
a Java specialist at DevelopMentor (http://www.develop.com/java).
You can view this issue of the Tech Tips on the Web at
http://developer.java.sun.com/developer/TechTips/2000/tt1027.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
CLASS LOADERS AS A NAMESPACE MECHANISM
Program developers must cope with the problem of name collisions.
This is true for all programming environments. If you name a class
"BankAccount," there is a good chance that somebody else will
use the same name for another class. Sooner or later the two
classes will collide in the same process, wreaking havoc.
Programmers who begin to use the Java(tm) programming language
learn to use packages to prevent name collisions. Instead of naming
a class "BankAccount," you place the class in a package, perhaps
naming the class something like com.develop.bank.BankAccount (that
is, the reverse of your domain name). Hopefully there is minimal
danger of a name collision with this approach.
However with a language as dynamic as the Java programming
language, "minimal danger" is not safe enough. The package naming
scheme relies on the cooperation of all developers in a system.
This is difficult to coordinate and is error-prone. Imagine, for
example, the confusion that would result if the European and
American branches of a company each created a
"com.someco.FootballPlayer" class! More importantly, Java
applications can run for a long time--long enough that you might
recompile and ship a new version of a class without ever shutting
down the application. This leads to multiple versions of the same
class trying to live in the same application.
The Java(tm) virtual machine handles these problems through its
class loader architecture. Every class in an application is loaded
by an associated ClassLoader object. How does this solve the name
collision problem? The VM treats classes loaded by different
class loaders as entirely different types, even if their packages
and names are exactly the same. Here's a simple example:
import java.net.*;
public class Loader {
public static void main(String [] args)
throws Exception
{
URL[] urlsToLoadFrom = new URL[]{new URL("file:subdir/")};
URLClassLoader loader1 = new URLClassLoader(urlsToLoadFrom);
URLClassLoader loader2 = new URLClassLoader(urlsToLoadFrom);
Class cls1 = Class.forName("Loadee", true, loader1);
Class cls2 = Class.forName("Loadee", true, loader2);
System.out.println("(cls1 == cls2) is "
+ ((cls1 == cls2) ? "true" : "false"));
}
}
//place Loadee in a subdir named 'subdir'
public class Loadee {
static {
System.out.println("Loadee class loaded");
}
}
Compile the Loader class, then compile the Loadee class in a
subdirectory named "subdir." When you run the Loader class, you
should see the output:
Loadee class loaded
Loadee class loaded
(cls1 == cls2) is false
Both cls1 and cls2 are named Loadee. In fact, they both come from
the same .class file (although that does not need to be the
case, in general). Nevertheless, the VM treats cls1 and cls2 as two
separate classes.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
RELATING CLASS LOADERS TO THE CLASS PATH
The example above is interesting, but most developers do not code
that way. Instead of using the reflection method Class.forName()
to load classes into the VM, they simply write code such as:
//load a Foo (don't bother me about where it came from...)
Foo f = new Foo();
In order for this to compile and run, the Foo class must be in the
class path (or in a few other special locations beyond the scope
of this tip). The class path is the place where most developers
interact with class loaders, although implicitly. But from the
perspective of the VM, the class path is just a special case of
the class loader architecture.
When a Java application starts, it creates a class loader that
searches a set of URLs. The application initializes the class
loader to use file URLs based on the values in the class path.
Assuming that the application's main class is in the class path,
the main class is loaded and begins executing. After that, class
loading is implicit. Whenever a class refers to another class, for
example, by initializing a field or local variable, the referent
is immediately loaded by the same class loader that loaded the
referencing class. Here's an example:
//compile these classes all in one file ReferencingClass.java
class Referent {
static {
System.out.println("Referent loaded by " +
Referent.class.getClassLoader());
}
}
public class ReferencingClass {
public static void main(String [] args) {
System.out.println("ReferencingClass loaded by " +
ReferencingClass.class.getClassLoader());
//refer to Referent
new Referent();
System.out.println("String loaded by " +
String.class.getClassLoader());
}
}
When you compile and run the ReferencingClass, you should see
that both ReferencingClass and Referent are loaded by the same
class loader. (If you are using the JDK it will be a nested class
of sun.misc.Launcher.) However, the String class is loaded by a
different class loader. In fact the String class is loaded by the
"null" class loader, even though String is also referenced by
ReferencingClass. This is an example of class loader delegation.
A class loader has a parent class loader, and the set of a class
loader and its ancestors is called a delegation.
Whenever a class loader loads a class, it must consult its parent
first. A standard Java application begins with a delegation of
three class loaders: the system class loader, the extension class
loader, and the bootstrap class loader. The system class loader
loads classes from the class path, and delegates to the extension
class loader, which loads Java extensions. The parent of the
extension class loader is the bootstrap class loader, also
known as the null class loader. The bootstrap class loader loads
the core API.
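The delegation chain is easy to inspect for yourself. The following sketch (not part of the original tip) walks upward from the system class loader, printing each parent until getParent() returns null, which is how the bootstrap loader is represented. The exact loaders you see depend on your JDK version: the v 1.3 JDK described here shows the system and extension loaders, while later JDKs show application and platform loaders.

```java
// Sketch: print the default class loader delegation chain.
// The bootstrap loader has no ClassLoader object, so the walk
// ends when getParent() returns null.
public class DelegationChain {
    public static void main(String[] args) {
        ClassLoader cl = ClassLoader.getSystemClassLoader();
        while (cl != null) {
            System.out.println(cl.getClass().getName());
            cl = cl.getParent();
        }
        System.out.println("null (bootstrap class loader)");
    }
}
```

Note that core API classes confirm the picture: String.class.getClassLoader() returns null, because String is loaded by the bootstrap loader.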
The delegation model has three important benefits. First, it
protects the core API. Application-defined class loaders are not
able to load new versions of the core API classes because they
must eventually delegate to the bootstrap loader. This prevents the
accidental or malicious loading of system classes that might
corrupt or compromise the security of the VM. Second, the
delegation model makes it easy to place common classes in a shared
location. For example, in a servlet engine the servlet API classes
could be placed in the class path where they can be shared. But
the actual servlet implementations might be loaded by a separate
URLClassLoader so that they can be reloaded later. Third, the
delegation model makes it possible for objects loaded by different
class loaders to refer to each other through superclasses or
superinterfaces that are loaded by a shared class loader higher in
the delegation.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING CLASS LOADERS FOR HOT DEPLOYMENT
The ability to load multiple classes with the same name into the
virtual machine allows servers to partition processing into
separate namespaces. This partitioning could be space-based,
separating code from different sources to simplify security. For
example, applets from two different codebases could run in the
same browser process. Or, the partitioning could be time-based.
Here, new versions of a class could be loaded as they become
available. This time-based partitioning feature is sometimes
known as hot deployment. The following code demonstrates hot
deployment in action.
//file ServerItf.java
public interface ServerItf {
public String getQuote();
}
//file Client.java
import java.net.URL;
import java.net.URLClassLoader;
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class Client {
static ClassLoader cl;
static ServerItf server;
public static void loadNewVersionOfServer() throws Exception {
URL[] serverURLs = new URL[]{new URL("file:server/")};
cl = new URLClassLoader(serverURLs);
server = (ServerItf) cl.loadClass("ServerImpl").newInstance();
}
public static void test() throws Exception {
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
loadNewVersionOfServer();
while (true) {
System.out.print("Enter QUOTE, RELOAD, GC, or QUIT: ");
String cmdRead = br.readLine();
String cmd = cmdRead.toUpperCase();
if (cmd.equals("QUIT")) {
return;
} else if (cmd.equals("QUOTE")) {
System.out.println(server.getQuote());
} else if (cmd.equals("RELOAD")) {
loadNewVersionOfServer();
} else if (cmd.equals("GC")) {
System.gc();
System.runFinalization();
}
}
}
public static void main(String [] args) {
try {
test();
}
catch (Exception e) {
e.printStackTrace();
}
}
}
//file ServerImpl.java. Place this file in a subdirectory named 'server'.
class Reporter {
Class cls;
Reporter(Class cls) {
this.cls = cls;
System.out.println("ServerImpl class " + cls.hashCode() + " loaded into VM");
}
protected void finalize() {
System.out.println("ServerImpl class " + cls.hashCode() + " unloaded from VM");
}
}
public class ServerImpl implements ServerItf {
//catch the class being unloaded from the VM
static Object reporter = new Reporter(ServerImpl.class);
public String getQuote() {
return "A rolling stone gathers no moss";
}
}
Compile the Client and ServerItf files in a directory, and then
compile the ServerImpl in a subdirectory named "server." Make
sure to include the higher level directory on your class path
when you compile ServerImpl, for example:
javac -classpath \mypath \mypath\server\ServerImpl.java
When you start the Client, you should see the following prompt:
Enter QUOTE, RELOAD, GC, or QUIT:
Enter QUOTE to see the current quote from the server:
A rolling stone gathers no moss
Now, without shutting down the process, use another console or GUI
to edit the ServerImpl class. Change the getQuote method to return
a different quote, for example, "Wet birds do not fly at night."
Recompile the server class. Then return to the console where the
Client is still running and enter RELOAD. This invokes the method
loadNewVersionOfServer(), which uses a new instance of
URLClassLoader to load a new version of the server class. You
should see something like this:
ServerImpl class 7434986 loaded into VM
Reissue the QUOTE command. You should now see your new version of
the quote, for example:
Wet birds do not fly at night.
Notice that you did this without shutting down your application.
This same technique is used by servlet engines such as Apache
Software Foundation's Tomcat to automatically reload servlets
that have changed.
There are a few interesting points about explicitly using class
loaders in your application. First, instances of Class and
ClassLoader are simply Java objects, subject to the normal memory
rules of the Java(tm) platform. In other words, when classes and
class loaders are no longer referenced, they can be reclaimed by
the garbage collector. This is important in a long-running
application, where unused old versions of classes could waste
a lot of memory. However, making sure that classes are
unreferenced can be tricky. Every instance has a reference to its
associated class, every class has a reference to its class loader,
and every class loader has a reference to every class it ever
loaded. It's easy to view this tangled knot of references as a
class loader "hairball." If you have a reference to any object in
the hairball, none of the objects can be reclaimed by the garbage
collector. In the simple Client application above, you can verify
by inspection that all references to the old class loader and its
classes are explicitly dropped when a new class loader is created.
If you need more proof, you can issue the GC command from the
Client console. On a VM with a reasonably aggressive GC
implementation, you should see a log message indicating that the
old ServerImpl class has been reclaimed by the garbage collector.
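If you want evidence that does not depend on a finalizer, a WeakReference can track the loader object itself. This sketch is not part of the original example; it simply shows that a URLClassLoader with no remaining strong references becomes eligible for collection.

```java
import java.lang.ref.WeakReference;
import java.net.URL;
import java.net.URLClassLoader;

// Sketch: track a class loader with a WeakReference. Once the only
// strong reference is dropped, a GC cycle can clear the weak reference.
public class LoaderGc {
    public static void main(String[] args) throws Exception {
        URLClassLoader loader = new URLClassLoader(new URL[0]);
        WeakReference<URLClassLoader> ref = new WeakReference<>(loader);
        loader = null;                 // drop the only strong reference
        System.gc();
        System.out.println(ref.get() == null ? "collected" : "still reachable");
    }
}
```

As the tip notes, collection timing is up to the VM; a single System.gc() call is a hint, not a guarantee.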
Notice that the Client code never refers to ServerImpl directly.
Instead, the ServerImpl instance is held in a reference of type
ServerItf. This is critical to making explicit use of class
loaders. Remember the rules about implicit class loading, and
imagine what would happen if the Client had a field of type
ServerImpl. When the VM needs to initialize that field, it uses
the Client's class loader to try to load ServerImpl. Client is the
application main class, so it is loaded from the class path by the
system class loader. Because the ServerImpl class is not on the
class path, the reference to it causes a NoClassDefFoundError.
Don't be tempted to "fix" this by placing ServerImpl in the class
path. If you do that, the ServerImpl class will indeed load, but
it will load under control of the system class loader. This
defeats hot deployment because your instances of URLClassLoader
delegate to the system class loader. So, no matter how many times
you create a URLClassLoader, you always get the copy of ServerImpl
that was originally loaded from the class path.
For more information on class loaders, see:
- Book: "Inside The Java Virtual Machine," by Bill Venners.
- Article: "A New Era for Java Protocol Handlers," by Brian Maso
http://java.sun.com/jdc/onlineTraining/protocolhandlers/
- White paper: "Understand Class.forName()", by Ted Neward
http://www.javageeks.com/Papers/ClassForName/
. . . . . . . . . . . . . . . . . . . . . . .
October 10, 2000. This issue covers:
* Customizing JToolTips
* Shadowing
These tips were developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
You can view this issue of the Tech Tips on the Web at
http://developer.java.sun.com/developer/TechTips/2000/tt1010.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
CUSTOMIZING JTOOLTIPS
JToolTip is a Swing class that you use to provide a tip for
a Swing component. When the mouse cursor is moved over the
component, a short text message is displayed describing the
function of the component.
It's easy to set a tip for a component; you just say:
comp.setToolTipText("tip text");
Let's look at a couple of ways of customizing tool tips, in the
context of the following application:
import java.awt.*;
import java.awt.event.*;
import java.awt.image.*;
import javax.swing.*;
// a customized label that displays a color fade image
class ColorLabel extends JLabel {
private static final int WIDTH = 100; // label width
private static final int HEIGHT = 100; // label height
private static final int SZ = 20; // size of tip area
private static Image img; // generated image for label
private static ImageIcon icon; // ImageIcon for the image
// generate a color fade image
// adapted from 1.3 java/awt/image/MemoryImageSource.java
static {
// generate the pixel array
int pixels[] = new int[WIDTH * HEIGHT];
int index = 0;
for (int y = 0; y < HEIGHT; y++) {
int red = (y * 255) / (HEIGHT - 1);
for (int x = 0; x < WIDTH; x++) {
int blue = (x * 255) / (WIDTH - 1);
pixels[index++] = (255 << 24) |
(red << 16) | blue;
}
}
// generate the actual image from the pixels
img = Toolkit.getDefaultToolkit().createImage(
new MemoryImageSource(WIDTH, HEIGHT, pixels,
0, WIDTH));
icon = new ImageIcon(img);
}
// an inner class, objects of which represent one
// customized tooltip with bounding box and text specified
static class Tip {
Rectangle rect;
String text;
Tip(Rectangle r, String t) {
rect = r;
text = t;
}
};
// the list of custom tooltips
static Tip tips[] = {
new Tip(new Rectangle(0, 0, SZ, SZ),
"Black Part"),
new Tip(new Rectangle(WIDTH - SZ, 0, SZ, SZ),
"Blue Part"),
new Tip(new Rectangle(0, HEIGHT - SZ, SZ, SZ),
"Red Part"),
new Tip(new Rectangle(WIDTH - SZ, HEIGHT - SZ, SZ, SZ),
"Pink Part"),
};
// constructor for ColorLabel
// set the label image and the default tooltip text
public ColorLabel() {
super(icon);
setToolTipText("Color Fade Example");
}
// override of JComponent.getToolTipText to support
// custom tooltips based on the mouse position
public String getToolTipText(MouseEvent e) {
// get mouse position
Point p = e.getPoint();
// see if it's in any of the custom tooltip
// bounding boxes
for (int i = 0; i < tips.length; i++) {
if (tips[i].rect.contains(p)) {
return tips[i].text;
}
}
// if not, return default
return getToolTipText();
}
}
public class ToolTipDemo {
public static void main(String args[]) {
// set up the frame and the window closing event handler
JFrame frame = new JFrame("ToolTipDemo");
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
// create an Exit button with a customized
// tooltip that uses an italicized font
JButton button = new JButton("Exit") {
public JToolTip createToolTip() {
JToolTip t = super.createToolTip();
t.setFont(new Font("TimesRoman",
Font.ITALIC, 16));
return t;
}
};
button.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
System.exit(0);
}
});
button.setToolTipText("Terminate the application");
// set up the panel
JPanel panel = new JPanel();
panel.add(new ColorLabel());
panel.add(button);
// display the frame
frame.getContentPane().add(panel);
frame.setSize(200, 150);
frame.setLocation(300, 200);
frame.setVisible(true);
}
}
This program draws a color fade box on the screen. A color fade is
a gradual change from one color to another, for example from black
to blue across the top of the box. The color fade example is
adapted from that found in the comments in
java/awt/image/MemoryImageSource.java for JDK 1.3.
The color fade is calculated into a pixel array, which is then
used to construct the Image object. An ImageIcon is then formed
from the image. The ImageIcon is used to set the icon for the
JLabel object that represents the box. There's also an Exit button
drawn next to the box.
The first type of tooltip customization is for the Exit button.
The text of the tip is changed to a 16-point italicized Times Roman
font. The program does this by overriding JComponent.createToolTip.
Notice that the overriding method calls the superclass's
createToolTip method to get the tip object; the overriding method
then sets the font for the object.
The other kind of customization is more sophisticated. If you have
an application with a complex GUI component in it, it would be nice
to customize tooltips based on the position of the mouse within the
component.
To do this, you can override JComponent.getToolTipText(MouseEvent).
By default, this method simply returns the text that was set with
setToolTipText. But you can specify your own version of the method,
and obtain the mouse cursor position; you can then return custom
text based on the position.
The example program above sets a general tip "Color Fade Example"
for the color fade box. Then the program calls getToolTipText to get
the mouse position. getToolTipText also checks whether the mouse is
in any of the four corners of the box. A corner is defined to be 20
x 20 pixels. If the mouse is in one of the corners, a custom tip
such as "Blue Part" is displayed.
Other types of tooltip customization are possible, for example,
you can set a preferred location for the display of a tooltip.
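For example, JComponent provides a getToolTipLocation(MouseEvent) hook you can override; returning null restores the default placement. Here is a minimal sketch (not from the tip itself) that pins a button's tip directly below the component:

```java
import java.awt.Point;
import java.awt.event.MouseEvent;
import javax.swing.JButton;

// Sketch: show the tooltip just below the component rather than at
// the mouse position. Coordinates are relative to the component.
class BelowButton extends JButton {
    public BelowButton(String text) {
        super(text);
        setToolTipText("Shown just below the button");
    }
    public Point getToolTipLocation(MouseEvent e) {
        return new Point(0, getHeight());
    }
}
```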
For more information about tooltips, see the "Tooltips" section
in Chapter 4 of "Graphic Java - Mastering the JFC 3rd Edition,
Volume II Swing" by David Geary.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
SHADOWING
Suppose you're reading some Java code, and you come across
something like this:
class A {
int A = 37;
A() {
int A = 47;
A aref = new A() {
int A = 57;
void A() {}
};
}
}
This usage is legal, but not necessarily desirable. In fact, it
raises an interesting question about how the Java programming
language specification treats conflicting names. There are
several terms used in this area to describe various cases:
shadowing, overriding, hiding, and obscuring. This tip looks at
an example of each of these.
First an important point needs to be made: just because the
Java programming language allows you to do something, it doesn't
always mean that it's a desirable thing to do. For example, it's
legal to say:
class A {
int A;
}
in a program, but you probably shouldn't because it's confusing.
The best way to handle issues with conflicting names is to simply
avoid them as far as possible. For example, you can avoid many
problems if you follow a coding convention that specifies that
the first letter of a type name (such as "class A") should be
capitalized, while the first letter of a field name (such as
"int A") should be lowercase.
Now let's look at an example of shadowing:
public class Shadow {
int a;
int b;
// parameters a/b shadow instance variables a/b
public Shadow(int a, int b) {
// set parameter equal to itself
a = a;
// set instance variable b equal to parameter b
this.b = b;
}
public static void main(String args[]) {
Shadow s = new Shadow(37, 47);
System.out.println("a = " + s.a);
System.out.println("b = " + s.b);
}
}
When you run Shadow, you should see:
a = 0
b = 47
One place shadowing comes up is when you have field names and
parameter names that are the same, and you want to use the
parameters to set the fields:
int a;
public void f(int a) {
a = a;
}
This doesn't work, because the parameter "a" shadows the field "a",
that is, the parameter name blocks access via a simple name to the
field name. You can get around this problem by saying:
this.a = a;
which means "set field a to parameter a". Whether this style of
usage is desirable or not depends on your particular biases; one
point in its favor is that you don't have to invent parameter names
like "a1" or "_a".
The second example is one that illustrates overriding:
class A {
void f() {
System.out.println("A.f");
}
}
public class Override extends A {
// instance method f overrides instance method A.f
void f() {
System.out.println("Override.f");
}
void g() {
// call Override.f
f();
// call A.f
super.f();
}
public static void main(String args[]) {
Override o = new Override();
o.g();
}
}
When you run Override, you should see:
Override.f
A.f
In this example, the method Override.f overrides the method A.f.
If you have an object of type Override, and call f, Override.f
is called. However if you have an object of type A, A.f is called.
This approach is a standard part of object-oriented programming.
For example, java.lang.Object declares a hashCode method, but
subclasses, such as String, provide an overriding version of the
method. The overriding version is tailored to the particular type
of data represented by the class.
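A quick way to see this at work (a sketch, not part of the tip) is to compare String's value-based hashCode with object identity: two distinct String objects with the same characters are different objects, yet report the same hash code because String overrides Object.hashCode.

```java
// Sketch: String overrides Object.hashCode, so equal strings
// produce equal hash codes even though they are distinct objects.
public class HashOverride {
    public static void main(String[] args) {
        String s1 = new String("abc");
        String s2 = new String("abc");
        System.out.println(s1 == s2);                        // false: distinct objects
        System.out.println(s1.hashCode() == s2.hashCode());  // true: value-based hash
    }
}
```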
You can call the superclass method by using the notation:
super.f();
A third example is that of hiding:
class A {
static void f() {
System.out.println("A.f");
}
void g() {
System.out.println("A.g");
}
}
public class Hide extends A {
static void f() {
System.out.println("Hide.f");
}
void g() {
System.out.println("Hide.g");
}
public static void main(String args[]) {
A aref = new Hide();
// call A.f()
aref.f();
// call Hide.g()
aref.g();
}
}
When you run Hide, you should see:
A.f
Hide.g
In this example, Hide.f hides A.f, and Hide.g overrides A.g.
One way of seeing the difference between hiding and overriding
is to note that overriding applies to regular instance methods;
the actual method that is called is determined at run time
based on the type of the object (a so-called "virtual function").
This sort of dynamic lookup does not happen for static methods
or for fields. For example, in this code:
class A {
int x = 37;
void f() {
System.out.println("A.f");
}
}
public class Lookup extends A {
int x = 47;
void f() {
System.out.println("Lookup.f");
}
public static void main(String args[]) {
A aref = new Lookup();
// call Lookup.f
aref.f();
// display A.x
System.out.println(aref.x);
}
}
the method reference through "aref" results in Lookup.f being
called, but the field reference obtains A.x. Or to say it another
way, the actual class of an object determines which instance method
is called. But for fields, the type of the reference is used
(here it's aref, of type A). When you run Lookup, you should see:
Lookup.f
37
The final example illustrates the idea of obscuring:
class A {
static int MIN_PRIORITY = 59;
};
public class Obscure {
static A Thread;
public static void main(String args[]) {
// print value of class variable Thread.MIN_PRIORITY
System.out.println(Thread.MIN_PRIORITY);
// print value of java.lang.Thread.MIN_PRIORITY
System.out.println(java.lang.Thread.MIN_PRIORITY);
}
}
When you run Obscure, you should see:
59
1
Consider the first print statement in this example, which prints:
Thread.MIN_PRIORITY
There are two possible meanings for this expression: either the
static field MIN_PRIORITY in the class java.lang.Thread, or the
static field MIN_PRIORITY in the class variable Thread in class
Obscure.
The Java language specification says that in this situation,
variables are chosen in preference to types. So the static field
in the class variable Thread is printed. You can work around this
by fully qualifying the class name Thread, as the example shows:
java.lang.Thread.MIN_PRIORITY
This code example is very sneaky, and represents a poor coding
style.
For more information about shadowing, see section 6.3.2,
"Obscured Declarations," section 7.5.2, "Type-Import-on-Demand
Declaration," section 8.4.6, Inheritance, Overriding, and Hiding,"
section 8.4.8.5, "Example: Invocation of Hidden Class Methods,"
and section 14.4.3, "Shadowing of Names by Local Variables"
in "The Java Language Specification Second Edition" by Gosling,
Joy, Steele, and Bracha (http://java.sun.com/docs/books/jls/).
. . . . . . . . . . . . . . . . . . . . . . .
September 26, 2000. This issue is about the class
java.lang.SecurityManager. This class is the backbone of
context-based security in the Java(tm) platform. The
SecurityManager class acts as a single point of control for
potentially unsafe operations such as deleting a file; it
decides whether the operations can proceed based on context.
This issue covers:
* Using SecurityManager
* Policies and the Policy File
This tip was developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
This issue of the JDC Tech Tips is written by Stuart Halloway,
a Java specialist at DevelopMentor (http://www.develop.com/java).
You can view this issue of the Tech Tips on the Web at
http://java.sun.com/jdc/TechTips/2000/tt0926.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING SECURITYMANAGER
The basic SecurityManager architecture is simple. Throughout the
JDK, the Java security team had to:
o Identify operations in the code that might pose a security
risk.
o Find places in the code where checks could be placed to guard
these operations (but do so with the smallest number of
bottlenecks).
o Throw an exception if the caller is not allowed to proceed.
This is how the SecurityManager class is used in the JDK source.
For example, writing to a file on a user's local hard drive is an
operation that needs to be secured. All file writes must at some
point involve a FileOutputStream constructor. So you should expect
to find a security checkpoint there:
//from the JDK 1.3 source...
public FileOutputStream(String name, boolean append)
throws FileNotFoundException
{
SecurityManager security = System.getSecurityManager();
if (security != null) {
security.checkWrite(name);
}
//go on and actually construct the object
This is a representative example of the security checks you find
throughout the JDK. Before the actual work of the constructor
begins, there is a check with the System class to see if a
security manager is installed. If there is one, the constructor
calls an appropriate method on the security manager, passing in
any additional information that might influence the outcome. In
the case of writing to a file, the relevant method is
checkWrite() and the extra information is the name of a file.
Because the hooks are already in place throughout the JDK, you
can customize security by writing your own subclass of
SecurityManager. Here is a simple example that only permits
writing to a file named "temp" in the current directory.
import java.io.*;
class TempfileSecurityManager extends SecurityManager {
public void checkWrite(String name) {
if (!("temp".equals(name))) {
throw new SecurityException("Access to '" + name + "' denied");
}
}
}
public class TestSecurityManager {
public static void writeFile(String name) throws IOException {
System.out.println("Writing to file " + name);
FileOutputStream fos = new FileOutputStream(name);
//write something here...
fos.close();
}
public static void main(String[] args) throws IOException {
System.setSecurityManager(new TempfileSecurityManager());
writeFile("temp");
writeFile("other");
}
}
The TestSecurityManager class installs a TempfileSecurityManager
through the System.setSecurityManager method. If you run
TestSecurityManager, you should see that the writeFile method
works fine when the file passed in is named "temp" but fails
when "other" is passed in as the filename.
The TempfileSecurityManager is simple, but it has a major
weakness. A particular capability is either granted to all the
code running in the VM, or not granted at all. Real systems
need to assign different abilities to different pieces of code
running in the same VM. For example, it would be nice to have
a logging facility that could write to a logfile, but prevent
any other code from writing to the local file system. The
TempfileSecurityManager cannot handle this because it only looks
at the filename being opened. A better implementation would also
look at the context in which the file is opened.
The SecurityManager base class provides the needed context
information. The protected method getClassContext() returns an
array of all the classes currently on the callstack. This enables
a security manager to examine all the classes and decide if they
should be trusted to perform the operation in question. For
example, the following callstack array could be trusted:
Class java.io.FileOutputStream
Class com.develop.log.EventLog
etc.
But the following callstack array will probably not be trusted.
Class java.io.FileOutputStream
Class org.fierypit.EvilApplet
etc.
Of course, the perpetrators of evil will not normally indicate
their intent by naming a class "EvilApplet." So more work is
necessary. For each class on the callstack, a security manager
implementation could call getClassLoader to determine the
class loader for the class. Given smart implementations of a
class loader such as the JDK's java.net.URLClassLoader, it would
then be possible to determine where on the web a class came from,
and even check its digital signature.
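To make the idea concrete, here is a minimal sketch of a stack-inspecting security manager. It is an illustration only: the package prefix "com.trusted.log" is hypothetical, and a real implementation would, as described above, consult class loaders and signatures rather than trust a package name.

```java
// Sketch: allow a file write only when a class from a (hypothetical)
// trusted package appears somewhere on the call stack.
class StackCheckingManager extends SecurityManager {
    public void checkWrite(String file) {
        for (Class<?> c : getClassContext()) {
            if (c.getName().startsWith("com.trusted.log.")) {
                return;  // trusted logging code is on the stack; permit
            }
        }
        throw new SecurityException("write to '" + file + "' denied");
    }
}
```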
At this point implementing your own security manager is starting
to sound like a lot of work. The checkWrite() method shown above
is only one of several dozen methods that you might need to
implement. Others cover operations such as accessing the network,
accessing system properties, and invoking native code methods.
For every one of these methods, a security manager needs to analyze
the callstack returned by getClassContext. For each class on the
stack, it might be necessary to collaborate with a class loader to
determine the class's origin. Even worse, the code can be tricky
to write and debug. In JDK 1.1, subclassing SecurityManager was
the only way to do context-based security, and because it was so
difficult, only a few people wrote security managers.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
POLICIES AND THE POLICY FILE
What JDK(tm) 1.1 needed was a security system that was declarative
instead of procedural; in other words, a system where application
developers and system administrators describe what security
settings they want instead of how to implement them.
JDK(tm) 1.2 and later provide declarative, policy-based security
through a new class java.security.AccessController.
AccessController and related classes build on the pre-existing
SecurityManager. You can still write your own security manager,
but if you choose to rely on the new, policy-based security,
you do not have to write any code. Starting with JDK 1.2,
SecurityManager is a concrete class that delegates to the
AccessController to implement a fine-grained, context-based
security policy. Sun Microsystems provides a reference
implementation of this policy that is controlled by a text file
called the policy file.
To see a policy file in use, examine the following variation of
the TestSecurityManager class:
import java.io.*;
public class TestSecurityManager {
public static void writeFile(String name) throws IOException {
System.out.println("Writing to file " + name);
FileOutputStream fos = new FileOutputStream(name);
//write something here...
fos.close();
}
public static void main(String[] args) throws IOException {
writeFile("temp");
writeFile("other");
}
}
This version of the class is different in that it does not call
System.setSecurityManager. So, the class should run without
security checks and write to both the "temp" and "other" files.
To enable 1.2 security, you can either use setSecurityManager
to install an instance of the SecurityManager class, or specify
the following property on the command line:
java -Djava.security.manager TestSecurityManager
By default, the permissions granted to your local code are minimal.
So you should see an AccessControlException when trying to access
the"temp" file:
java.security.AccessControlException: access denied
(java.io.FilePermission temp write)
In order to enable writing to the temp file, you need to specify a
policy in a policy file, which might look like this:
//file my.policy
grant {
permission java.io.FilePermission "temp", "write";
};
You can instruct the virtual machine to use this policy file by
specifying the java.security.policy property:
java -Djava.security.manager
-Djava.security.policy=my.policy
TestSecurityManager
With this command line, you should be able to write to the "temp"
file, but not to the "other" file. Notice that this new solution
provides the same capability as the custom TempfileSecurityManager
class. However, you didn't have to write any Java code to use the
policy file. The only work was making the correct settings in the
policy file and on the command line. While not foolproof, this
declarative approach is far less prone to error than coding it
yourself.
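You can get a feel for the matching rule behind the my.policy grant above without launching a JVM under a security manager, because permission objects expose that rule through the implies method. The following sketch (the class name is illustrative) mirrors the checks the runtime performs when TestSecurityManager writes its two files:

```java
import java.io.FilePermission;

public class PolicyMatchDemo {
    public static void main(String[] args) {
        // the permission granted by my.policy
        FilePermission granted = new FilePermission("temp", "write");

        // writing "temp" is covered by the grant
        System.out.println(
            granted.implies(new FilePermission("temp", "write")));   // true

        // writing "other" is not
        System.out.println(
            granted.implies(new FilePermission("other", "write")));  // false
    }
}
```

The AccessController performs essentially this test (across all permissions in effect) at every checked operation.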
The simple example above only begins to show the capabilities of
the policy file. More generally, the syntax of a grant block in
a policy file looks like this:
grant [codeBase "URL"] {
    permission permissionClassName "target", "action";
    // ...
};
JDK 1.2 includes permission classes for all of the security hooks
in the virtual machine. So, for example, you could enable
connecting to any machine's HTTP port with the following entry:
grant {
    permission java.net.SocketPermission "*:80", "connect";
};
The asterisk in the target string "*:80" is a wildcard for the
machine address, so the connect action is allowed to target port
80 of any machine.
By default, grant entries apply to all the classes running in the
JVM. As mentioned before, it is important to have a way to divide
classes into different protection domains, each with its
own set of permissions. The optional codeBase field accomplishes
this by limiting the grant to classes loaded from a specific URL.
Consider the following policy file:
grant codeBase "file:." {
    permission java.security.AllPermission;
};

grant codeBase "http://www.develop.com/TrustWorthyApplets/" {
    permission java.net.SocketPermission "*:80", "connect";
};
The first grant entry uses a file URL to give classes from the
current directory the special permission "AllPermission." This
permission basically disables security checks, and is useful only
for very trusted code. In this example the trusted code is in the
current directory (presumably you wrote that code yourself). The
second entry uses an HTTP URL to specify that applets downloaded
from a specific website can connect to any machine's HTTP port.
The codeBase field makes it easy to configure fine-grained access
control, without writing any code. This flexible control is
essential for distributed systems built with higher level
technologies such as RMI, Jini, or EJB.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The JDK 1.2 security architecture supports several interesting
capabilities not covered here, including digital signing, custom
permissions, custom policy implementations, and privileged scopes.
For more information on these security features, examine the
security documentation at:
http://java.sun.com/j2se/1.3/docs/guide/security/index.html
Java supports user-based security through the Java
Authentication and Authorization Service (JAAS). For
information about JAAS, see:
http://java.sun.com/products/jaas/
For a comprehensive description of security in the Java 2
Platform, see the book "Inside Java 2 Platform Security:
Architecture, API Design, and Implementation" by Li Gong
(http://java.sun.com/docs/books/security/index.html).
. . . . . . . . . . . . . . . . . . . . . . .
September 12, 2000. This issue covers:
* Using Class Methods and Variables
* Using Progress Bars and Monitors in Java GUI
Applications
These tips were developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
You can view this issue of the Tech Tips on the Web at
http://developer.java.sun.com/developer/TechTips/2000/tt0912.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING CLASS METHODS AND VARIABLES
Suppose that you're designing a Java class to do some type of time
scheduling. One of the things you need within the class is a list
of the number of days in each month of the year. Another thing you
need is a method that determines if a given calendar year (like
1900) is a leap year. The features might look like this:
class Schedule {
    private int days_in_month[] =
        {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
    private boolean isLeapYear(int year) {
        if (year % 4 != 0) {
            return false;
        }
        if (year % 400 == 0) {
            return true;
        }
        return (year % 100 != 0);
    }
}
Implementing the class in this way will work, but there's a better
way to structure the list and the method.
Consider that a table of the number of days in each month is a
fixed set of values that does not change. In other words, January
always has 31 days. In the class above, each instance (object) of
Schedule will contain the same table of 12 values.
This duplication is wasteful of space, and it gives the false
impression that the table is somehow different in each object,
even though it's not. A better way to structure the table is like
this:
private static final int days_in_month[] =
{31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
The "final" keyword means that the table does not change after
initialization, and the "static" keyword means that there is
a single copy of the table shared across all instances of the
class. A variable declared this way is called a "class variable"
or "static field," as contrasted with "instance variable."
In a similar way, consider that the isLeapYear method does not
actually do anything with object-specific data. It simply accepts
an integer parameter representing a year, and returns a true/false
value. So it would be better to say:
private static boolean isLeapYear(int year) {
    if (year % 4 != 0) {
        return false;
    }
    if (year % 400 == 0) {
        return true;
    }
    return (year % 100 != 0);
}
This is an example of a "class method" or "static method".
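Because the static isLeapYear no longer touches instance state, callers can invoke it through the class name with no Schedule object in sight. A minimal runnable sketch (the class name and the public modifier are added here so the method can be called from outside):

```java
public class LeapYear {
    public static boolean isLeapYear(int year) {
        if (year % 4 != 0) {
            return false;
        }
        if (year % 400 == 0) {
            return true;
        }
        return (year % 100 != 0);
    }

    public static void main(String[] args) {
        // century years are leap years only if divisible by 400
        System.out.println(isLeapYear(1900));  // false
        System.out.println(isLeapYear(2000));  // true
        System.out.println(isLeapYear(1996));  // true
    }
}
```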
There are several interesting points to note about class methods
and variables, some of them obvious, some not. The first point is
that a class variable exists even if no instances of the class
have been created. For example, if you have:
class A {
    static int x;
}
You can say:
int i = A.x;
in your program, whether or not you have created any A objects.
Another point is that class methods do not operate on specific
objects, so there's no "this" reference within such methods:
class A {
    static int x = 37;
    int y = 47;
    static void f() {
        A aref = this;  // invalid
        int i = x;      // valid
        int j = y;      // invalid
        g();            // invalid
    }
    void g() {
        A aref = this;  // valid
        int i = x;      // valid
        int j = y;      // valid
        f();            // valid
    }
}
The invalid cases in the previous example are ones that require
an object. For example, "y" is an instance variable within objects
of type A. Because the static method f does not operate on a
particular object of A, there's no y field to access.
When you access a static field outside of its class, you need to
qualify it with the class name:
class A {
    public static int x = 37;
}

class B {
    static int i = A.x;
}
A less desirable but legal form of qualification looks like this:
class A {
    public static int x = 37;
}

class B {
    A aref = new A();
    int i = aref.x;
}
This usage gives the false impression that "x" is an instance
variable of an object of A. It's possible to take such usage even
further, and say:
class A {
    public static int x = 37;
}

class B {
    A aref = null;
    int i = aref.x;
}
This usage is legal and will not trigger an exception; since x is
not an instance variable, there is no actual need to access the
object referenced by aref.
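This behavior is easy to verify. In the sketch below (class names are illustrative), the field access through the null reference compiles to a getstatic instruction, so the object reference is never dereferenced and no NullPointerException occurs:

```java
class Counter {
    public static int x = 37;
}

public class NullRefDemo {
    public static void main(String[] args) {
        Counter aref = null;
        // resolved at compile time to Counter.x; aref is never dereferenced
        int i = aref.x;
        System.out.println(i);  // prints 37, no NullPointerException
    }
}
```

The Java Language Specification requires only that the reference expression be evaluated; its value is then discarded for a static field access.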
Given these details of how class methods and variables work, where
would you want to use them? One type of usage was illustrated
above with the Schedule class -- you have some common data shared
by all objects of a particular type. Or perhaps you have some
methods that operate only on class variables. Maybe the methods
don't operate on class data at all, but are somehow related to the
function of the class; the isLeapYear method illustrates this form
of usage.
You can also use class methods and variables as a packaging
technique. Suppose you have some legacy code that you would like
to convert, and imagine that the code uses some global variables.
Global variables are generally undesirable, and in any case the
Java language has no direct equivalent. But you'd like to come up
with an equivalent, to help the conversion process along. Here's
one way you can structure the code:
public class Globals {
    private Globals() {}
    public static int A = 1;
    public static int B = 2;
    public static int C = 3;
}
Using this class, you have three "pseudo globals" with names
Globals.A, Globals.B, and Globals.C, which you can use throughout
your application. The private constructor for Globals emphasizes
that the class is being used simply as a packaging vehicle. It's
not legal to actually create instances of the class.
This particular structuring technique is not always desirable,
because it's easy to change field values from all over your code.
An alternative approach is to make the static fields private, and
allow changes to them only through accessor methods. Using this
approach, you can more readily trap field changes. Here's an
example:
public class Globals {
    private Globals() {}
    private static int A = 1;
    public static void setA(int i) {A = i;}
    public static int getA() {return A;}
}
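For example, the accessor is a natural place to add validation, something the public-field version cannot do. This sketch (the class name Config and the non-negative rule are illustrative) rejects out-of-range values at the single choke point:

```java
public class Config {
    private Config() {}
    private static int A = 1;

    public static void setA(int i) {
        // trap changes here: reject values outside the legal range
        if (i < 0) {
            throw new IllegalArgumentException("A must be >= 0: " + i);
        }
        A = i;
    }

    public static int getA() {
        return A;
    }

    public static void main(String[] args) {
        Config.setA(42);
        System.out.println(Config.getA());  // 42
        try {
            Config.setA(-1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```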
You can also use a class to package methods. For example, two of
the class methods in java.lang.System are:
arraycopy
currentTimeMillis
These really don't have anything to do with each other, except
that they're both low-level system methods that provide services
to the user. Such methods are simply grouped together in the
System class.
A final use of class variables is to group together a set of
constants:
public class Constants {
    private Constants() {}
    public static final int A = 1;
    public static final int B = 2;
    public static final int C = 3;
}
You can do a similar thing with an interface:
public interface Constants {
    int A = 1;
    int B = 2;
    int C = 3;
}
What are the differences between using classes and interfaces to
group constants? Here are several:
1. Interface fields are implicitly public, static, and final.
2. You cannot change an interface field once it's initialized.
By contrast, you can change a field in a class if the field is
non-final; if you're really establishing a set of constants,
you probably don't want to do this.
3. You can use static initialization blocks to set up the fields
in a class. For example:
class Constants {
    public static final int A;
    static {
        A = 37;
    }
}
4. You can implement an interface in a class to gain convenient
access to the interface's constants.
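Point 4 works like this: a class that implements the interface inherits its constants and can refer to them without qualification. The names below are illustrative:

```java
interface Colors {
    int RED = 1;    // implicitly public, static, and final
    int GREEN = 2;
    int BLUE = 3;
}

public class Palette implements Colors {
    public static int sum() {
        // RED, GREEN, and BLUE are inherited from Colors;
        // no "Colors." prefix is needed
        return RED + GREEN + BLUE;
    }

    public static void main(String[] args) {
        System.out.println(sum());  // 6
    }
}
```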
For further information about class methods and class variables,
see sections 2.2.2, Static Fields, and 2.6.1, Static Methods in
"The Java Programming Language Third Edition" by Arnold, Gosling,
and Holmes
(http://java.sun.com/docs/books/javaprog/thirdedition/).
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING PROGRESS BARS AND MONITORS IN JAVA GUI APPLICATIONS
If you have a GUI application that performs a time-consuming task,
it's desirable to let the user know that the task is being
processed. It's also a good idea to give the user a progress
indicator, such as "the task is X% finished."
The Java Swing library has a couple of mechanisms for displaying
progress. This tip examines them in the context of a real-life
application. The application is one that searches for a string in
all files under a starting directory. For example, if you're on
a UNIX system and you specify "/usr" as the starting directory,
and a pattern "programming", the application displays a list of
all the files that contain "programming" somewhere within them.
This application is time-consuming. It can take a few seconds
to a few minutes to run, depending on how big the directory
structure is and how fast your computer runs.
The search process has two distinct phases. The first iterates
across the directory structure and makes a list of all the files.
The second phase actually searches the files.
It's not possible in the strictest sense to indicate progress
during the first phase. Progress is based on percentage complete.
Here there's no way to obtain the percentage completed, because
it's not possible to tell in advance how many files are in the
directory. In the second phase, however, it's possible to
get at least a rough idea of progress. The program can determine
that, for example, 59 out of 147 files have been searched so far.
The application code looks like this:
import java.awt.GridLayout;
import java.awt.Cursor;
import java.awt.event.*;
import java.util.*;
import java.io.*;
import javax.swing.*;
import java.lang.reflect.InvocationTargetException;

public class ProgressDemo {
    String startdir;       // starting directory for search
    String patt;           // pattern to search for
    JTextArea outarea;     // output area for file pathnames
    JFrame frame;          // frame
    JProgressBar progbar;  // progress bar
    JLabel fileslab;       // number of files found
    boolean search_flag;   // true if search in progress

    // nested class used to do actual searching
    class Search extends Thread {

        // do GUI updates
        void doUpdate(Runnable r) {
            try {
                SwingUtilities.invokeAndWait(r);
            }
            catch (InvocationTargetException e1) {
                System.err.println(e1);
            }
            catch (InterruptedException e2) {
                System.err.println(e2);
            }
        }

        // get a list of all the files under a given directory
        void getFileList(File f, List list) {
            // recurse if a directory
            if (f.isDirectory()) {
                String entries[] = f.list();
                for (int i = 0; i < entries.length; i++) {
                    getFileList(new File(f, entries[i]), list);
                }
            }
            // for plain files, add to list and
            // update progress bar
            else if (f.isFile()) {
                list.add(f.getPath());
                final int size = list.size();
                if (size % 100 != 0) {
                    return;
                }
                doUpdate(new Runnable() {
                    public void run() {
                        progbar.setValue(size % 1000);
                    }
                });
            }
        }

        // check whether a file contains the specified pattern
        boolean fileMatch(String fn, String patt) {
            boolean found = false;
            try {
                FileReader fr = new FileReader(fn);
                BufferedReader br = new BufferedReader(fr);
                String str;
                while ((str = br.readLine()) != null) {
                    if (str.indexOf(patt) != -1) {
                        found = true;
                        break;
                    }
                }
                br.close();
            }
            catch (IOException e) {
                System.err.println(e);
            }
            return found;
        }

        // perform the search
        public void run() {
            List filelist = new ArrayList();
            final String sep =
                System.getProperty("line.separator");

            // clear old output
            doUpdate(new Runnable() {
                public void run() {
                    outarea.setText("");
                    fileslab.setText("");
                }
            });

            // get the list of files and display a count
            getFileList(new File(startdir), filelist);
            final int size = filelist.size();
            doUpdate(new Runnable() {
                public void run() {
                    progbar.setValue(0);
                    fileslab.setText("Found " + size +
                        " files, now searching ...");
                }
            });

            // set up a progress monitor
            final ProgressMonitor pm = new ProgressMonitor(
                frame, "Searching files", "", 0, size - 1);
            pm.setMillisToDecideToPopup(0);
            pm.setMillisToPopup(0);

            // iterate across the files, updating
            // the progress monitor
            for (int i = 0; i < size; i++) {
                final String fn = (String)filelist.get(i);
                final int curr = i;
                if (pm.isCanceled()) {
                    break;
                }
                final boolean b = fileMatch(fn, patt);
                doUpdate(new Runnable() {
                    public void run() {
                        pm.setProgress(curr);
                        pm.setNote(fn);
                        if (b) {
                            outarea.append(fn + sep);
                        }
                    }
                });
            }

            // close the progress monitor and
            // set the caret position in the output
            // area to the beginning of the file list
            doUpdate(new Runnable() {
                public void run() {
                    pm.close();
                    outarea.setCaretPosition(0);
                    fileslab.setText("");
                }
            });
            search_flag = false;
        }
    }

    public ProgressDemo() {
        frame = new JFrame("ProgressDemo");

        // set up the window closer for the frame
        frame.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            }
        });

        // set up panels
        JPanel paneltop = new JPanel();
        JPanel panelbot = new JPanel();
        paneltop.setLayout(new GridLayout(5, 1));
        JPanel panel1 = new JPanel();
        panel1.add(new JLabel("Starting Directory"));
        final JTextField dirfield = new JTextField(20);
        panel1.add(dirfield);
        JPanel panel2 = new JPanel();
        panel2.add(new JLabel("Search Pattern"));
        final JTextField pattfield = new JTextField(20);
        panel2.add(pattfield);
        JPanel panel3 = new JPanel();
        JButton button = new JButton("Search");
        panel3.add(button);
        JPanel panel4 = new JPanel();
        progbar = new JProgressBar(0, 999);
        panel4.add(progbar);
        JPanel panel5 = new JPanel();
        fileslab = new JLabel();
        panel5.add(fileslab);
        JPanel panel6 = new JPanel();
        outarea = new JTextArea(8, 40);
        outarea.setEditable(false);
        JScrollPane jsp = new JScrollPane(outarea,
            ScrollPaneConstants.VERTICAL_SCROLLBAR_AS_NEEDED,
            ScrollPaneConstants.HORIZONTAL_SCROLLBAR_NEVER);
        panel6.add(jsp);

        // processing for "Search" button
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                startdir = dirfield.getText();
                patt = pattfield.getText();
                if (startdir == null ||
                        startdir.trim().equals("") ||
                        patt == null ||
                        patt.trim().equals("")) {
                    JOptionPane.showMessageDialog(
                        frame, "Invalid input", "Error",
                        JOptionPane.ERROR_MESSAGE);
                }
                else if (search_flag) {
                    JOptionPane.showMessageDialog(
                        frame, "Search in progress",
                        "Error", JOptionPane.ERROR_MESSAGE);
                }
                else {
                    search_flag = true;
                    new Search().start();
                }
            }
        });
        paneltop.add(panel1);
        paneltop.add(panel2);
        paneltop.add(panel3);
        paneltop.add(panel4);
        paneltop.add(panel5);
        panelbot.add(panel6);
        JPanel panel = new JPanel();
        panel.setLayout(new GridLayout(2, 1));
        panel.add(paneltop);
        panel.add(panelbot);

        // display the frame
        frame.getContentPane().add(panel);
        frame.pack();
        frame.setLocation(200, 200);
        frame.setVisible(true);
    }

    public static void main(String args[]) {
        new ProgressDemo();
    }
}
The main method creates an object of type ProgressDemo. This part
of the application sets up the various panels, input areas, and
buttons.
The action listener for the Search button validates input. It then
creates and starts a thread of type Search. Search is an inner
class used to do the actual searching. Searching is done via a
separate thread because searching is time-consuming, and it's a
bad idea to perform lengthy processing from the event dispatch
thread. The event dispatch thread is used to handle the Search
button selection and the call of the button's action listener. If
the actual search is also performed in this thread, the thread
cannot immediately respond to other events. An example of another
event might be clicking on the top right of the main window to
terminate the application.
The actual searching is done after control is transferred to the
run method of the Search class. One piece of code you'll see
repeatedly in this part of the code is:
doUpdate(new Runnable() {
public void run() {
...
}
});
Although searching is not done from the event dispatch thread,
it is desirable that the event dispatch thread be used to update
the GUI. The repeated code above is used because Swing is not
thread-safe. The code adds a runnable object to the event
dispatch thread queue. The run method for the object is called
when the object gets to the front of the queue.
The doUpdate method is implemented using
SwingUtilities.invokeAndWait. This means that doUpdate does not
return until the run method returns. It's also possible to use
SwingUtilities.invokeLater here, but using invokeAndWait makes
for smoother GUI updating of the progress bar.
The list of files to search is accumulated by doing a recursive
directory walk, using java.io.File. Because the program doesn't
know how many files there are, it can't indicate a percentage
complete; instead it repeatedly fills a JProgressBar object.
An object of type JProgressBar is initially created, with limits
in the range 0 to 999. The bar is updated as files are found
during the directory walk. How "full" the bar is depends on the
result of applying the modulo (%) operator on the count of number
of files. In other words, the progress bar is empty with a value
of 0, and full with a value of 999. If the program finds 500 or
1500 or 2500 files thus far, the bar is half full. This scheme
doesn't indicate a percentage complete, but simply that the
directory enumeration is "doing something".
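The modulo arithmetic behind this activity indicator is easy to check in isolation. This sketch (class and method names are illustrative) computes the value the demo hands to setValue for the file counts mentioned above:

```java
public class BarValueDemo {
    // the value the 0-999 progress bar receives after 'count' files
    public static int barValue(int count) {
        return count % 1000;
    }

    public static void main(String[] args) {
        System.out.println(barValue(500));   // 500: bar half full
        System.out.println(barValue(1500));  // 500: bar half full again
        System.out.println(barValue(2500));  // 500
        System.out.println(barValue(2000));  // 0: bar wraps back to empty
    }
}
```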
After tabulating the list of files, a ProgressMonitor object is
created. You specify a message to be displayed during the
operation (here it's "Searching files"), a note describing the
state of the operation (in this example it starts as an empty
string), and the minimum and maximum values for this object (here
0 and the number of files - 1, respectively). Then
setProgress(currentvalue) is called to indicate progress has been
made. The logic in the ProgressMonitor class determines whether
to pop up a display showing how far along the processing is.
Because the program knows the number of files to be searched,
this approach works pretty well.
As each file is searched, ProgressDemo calls setProgress and
setNote. These ProgressMonitor methods periodically update
the display as progress is being made. Note that the progress
monitor might not display if the searching to be done is very
short. ProgressMonitor has methods you can call to tailor
the amount of time before the monitor pops up.
Another approach for the progress monitor would be to keep track
of file lengths instead of file counts. This approach is slightly
more complicated, but gives a better indication of progress.
This is especially true if you're searching files of widely
varying lengths. For example, say you have 20 files. The first
10 are one byte long, and the last 10 are each one million bytes
long. In this case, the progress monitor display will be
misleading if it's based on file counts.
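The arithmetic makes the point concrete. With the hypothetical 20-file mix above, a count-based monitor reports 50% complete after the ten 1-byte files, while a bytes-based monitor correctly reports essentially 0%. This sketch (class and method names are illustrative) compares the two:

```java
public class ProgressMath {
    // percentage complete measured by file count
    public static int byCount(int done, int total) {
        return 100 * done / total;
    }

    // percentage complete measured by bytes processed
    public static int byBytes(long bytesDone, long bytesTotal) {
        return (int)(100 * bytesDone / bytesTotal);
    }

    public static void main(String[] args) {
        // ten 1-byte files done, ten 1,000,000-byte files still to search
        long totalBytes = 10L * 1 + 10L * 1000000;
        System.out.println(byCount(10, 20));          // 50
        System.out.println(byBytes(10L, totalBytes)); // 0
    }
}
```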
There's also a class ProgressMonitorInputStream designed
specifically for indicating progress while reading files.
A good place to read more about progress bars and progress
monitors is "Graphic Java - Mastering the JFC 3rd Edition
Volume II Swing" by David Geary. See especially "Swing and
Threads" in Chapter 2, and "Progress Bars, Sliders, and
Separators" in Chapter 11.
. . . . . . . . . . . . . . . . . . . . . . .
August 29, 2000. This issue is about bytecode. Programmers
coding in the Java(tm) programming language rarely view the
compiled output of their programs. This is unfortunate, because
the output, Java bytecode, can provide valuable insight when
debugging or troubleshooting performance problems. Moreover,
the JDK makes viewing bytecode easy. This tip shows you how
to view and interpret Java bytecode. It presents the following
topics related to bytecode:
* Getting Started With javap
* How Bytecode Protects You From Memory Bugs
* Analyzing Bytecode to Improve Your Code
This tip was developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
This issue of the JDC Tech Tips is written by Stuart Halloway,
a Java specialist at DevelopMentor (http://www.develop.com/java).
You can view this issue of the Tech Tips on the Web at
http://developer.java.sun.com/developer/TechTips/2000/tt0829.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
GETTING STARTED WITH JAVAP
Most Java programmers know that their programs are not typically
compiled into native machine code. Instead, the programs are
compiled into an intermediate bytecode format that is executed by
the Java(tm) Virtual Machine. However, relatively few
programmers have ever seen bytecode because their tools do not
encourage them to look. Most Java debugging tools do not allow
step-by-step execution of bytecode; they either show source code
lines or nothing.
Fortunately, the JDK(tm) provides javap, a command-line tool
that makes it easy to view bytecode. Let's see an example:
public class ByteCodeDemo {
    public static void main(String[] args) {
        System.out.println("Hello world");
    }
}
After you compile this class, you could open the .class file in a
hex editor and translate the bytecodes by referring to the virtual
machine specification. Fortunately, there is an easier way: javap
converts the bytecodes into human-readable mnemonics. You can get
a bytecode listing by passing the '-c' flag to javap as follows:
javap -c ByteCodeDemo
You should see output similar to this:
public class ByteCodeDemo extends java.lang.Object {
    public ByteCodeDemo();
    public static void main(java.lang.String[]);
}
Method ByteCodeDemo()
0 aload_0
1 invokespecial #1
4 return
Method void main(java.lang.String[])
0 getstatic #2
3 ldc #3
5 invokevirtual #4
8 return
From just this short listing, you can learn a lot about bytecode.
Begin with the first instruction in the main method:
0 getstatic #2
The initial integer is the offset of the instruction in the method.
So the first instruction begins with a '0'. The mnemonic for the
instruction follows the offset. In this example, the 'getstatic'
instruction pushes a static field onto a data structure called the
operand stack. Later instructions can reference the field in this
data structure. Following the getstatic instruction is the field
to be pushed; in this case it is the field at constant pool entry
#2, which refers to the static field System.out (of type
java.io.PrintStream). If you examined the bytecode directly, you
would see that the field information is not embedded directly in
the instruction. Instead, like all constants used by a Java class,
the field information is stored in a shared pool. Storing field
information in a constant pool reduces the size of the bytecode
instructions, because the instructions only have to store the
integer index into the constant pool instead of the entire
constant. In this example, the field information is at location #2
in the constant pool. The order of items in the constant pool is
compiler dependent, so you might see a number other than '#2.'
After analyzing the first instruction, it's easy to guess the
meaning of the other instructions. The 'ldc' (load constant)
instruction pushes the constant "Hello world" onto the operand
stack. The 'invokevirtual' invokes the println method, which pops
its two arguments from the operand stack. Don't forget that an
instance method such as println has two arguments: the obvious
string argument, plus the implicit 'this' reference.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
HOW BYTECODE PROTECTS YOU FROM MEMORY BUGS
The Java programming language is frequently touted as a "secure"
language for internet software. Given that the code looks so
much like C++ on the surface, where does this security come
from? It turns out that an important aspect of security is the
prevention of memory-related bugs. Computer criminals exploit
memory bugs to sneak malicious code into otherwise safe programs.
Java bytecode is a first line of defense against this sort of
attack, as the following example demonstrates:
public float add(float f, int n) {
    return f + n;
}
If you add this function to the previous example, recompile it, and
run javap, you should see bytecode similar to this:
Method float add(float, int)
0 fload_1
1 iload_2
2 i2f
3 fadd
4 freturn
At the beginning of a Java method, the virtual machine places
method parameters in a data structure called the local variable
table. As its name suggests, the local variable table also
contains any local variables that you declare. In this example,
the method begins with three local variable table entries; these
are for the three arguments to the add method. Slot 0 holds the
this reference, while slots 1 and 2 hold the float and int
arguments, respectively.
In order to actually manipulate the variables, they must be loaded
(pushed) onto the operand stack. The first instruction, fload_1,
pushes the float at slot 1 onto the operand stack. The second
instruction, iload_2, pushes the int at slot 2 onto the operand
stack. The interesting thing about these instructions lies in the 'i'
and 'f' prefixes, which illustrate that Java bytecode instructions
are strongly typed. If the type of an argument does not match the
type of the bytecode, the VM will reject the bytecode as unsafe.
Better still, the bytecodes are designed so that these type-safety
checks need only be performed once, at class load time.
How does this type-safety enhance security? If an attacker could
trick the virtual machine into treating an int as a float, or vice
versa, it would be easy to corrupt calculations in a predictable
way. If these calculations involved bank balances, the security
implications would be obvious. More dangerous still would be
tricking the VM into treating an int as an Object reference. In
most scenarios, this would crash the VM, but an attacker needs to
find only one loophole. And don't forget that the attacker doesn't
have to search by hand--it would be pretty easy to write a program
that generated billions of permutations of bad byte codes, trying
to find the lucky one that compromised the VM.
Another case where bytecode safeguards memory is array
manipulation. The 'aastore' and 'aaload' bytecodes operate on
Java arrays, and they always check array bounds. These bytecodes
throw an ArrayIndexOutOfBoundsException if an access goes past the
end of the array. Perhaps the most important checks of all apply
to the branching instructions, for example, the bytecodes that
begin with 'if.' In bytecode, branching instructions can only
branch to another instruction within the same method. The only
way to transfer control outside a method is to return, throw an
exception, or execute one of the 'invoke' instructions. Not only
does this close the door on many attacks, it also prevents nasty
bugs caused by dangling references or stack corruption. If you have
ever had a system debugger open your program to a random location
in code, you're familiar with these bugs.
The critical point to remember about all of these checks is that
they are made by the virtual machine at the bytecode level, not
just by the compiler. A compiler for a language such as C++ might
prevent some of the memory errors discussed above, but its
protection applies only at the source code level. Operating
systems will happily load and execute any machine code, whether
the code was generated by a careful C++ compiler or a malicious
attacker. In short, C++ is object-oriented only at the source code
level; Java's object-oriented features, by contrast, extend down
to the compiled code.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ANALYZING BYTECODE TO IMPROVE YOUR CODE
The memory and security protections of Java bytecode are there for
you whether you notice them or not, so why bother looking at the
bytecode? In many cases, knowing how the compiler translates your
code into bytecode can help you write more efficient code, and can
sometimes even prevent insidious bugs. Consider the
following example:
// return the concatenation str1+str2
String concat1(String str1, String str2) {
    return str1 + str2;
}

// append str2 to str1
void concat2(StringBuffer str1, String str2) {
    str1.append(str2);
}
Try to guess how many function calls each method requires to
execute. Now compile the methods and run javap. You should see
output like this:
Method java.lang.String concat1(java.lang.String, java.lang.String)
0 new #5
3 dup
4 invokespecial #6
7 aload_1
8 invokevirtual #7
11 aload_2
12 invokevirtual #7
15 invokevirtual #8
18 areturn
Method void concat2(java.lang.StringBuffer, java.lang.String)
0 aload_1
1 aload_2
2 invokevirtual #7
5 pop
6 return
The concat1 method allocates an object (new) and makes four method
calls: one invokespecial and three invokevirtuals. That is quite a
bit more work than the concat2 method, which makes only a single
invokevirtual call. Most
Java programmers have been warned that because Strings are
immutable it is more efficient to use StringBuffers for
concatenation. Using javap to analyze this makes the point in
dramatic fashion. If you are unsure whether two language
constructs are equivalent in performance, you should use javap
to analyze the bytecode. Beware of the just-in-time (JIT)
compiler, though. Because the JIT compiler recompiles the
bytecodes into native machine code, it can apply additional
optimizations that your javap analysis does not reveal. Unless
you have the source code for your virtual machine, you need to
supplement your bytecode analysis with performance benchmarks.
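A minimal benchmark along these lines might look as follows. The iteration count and the use of System.currentTimeMillis are arbitrary choices, and the timings only suggest a trend, since the JIT can change the picture:

```java
public class ConcatBench {
    static String concat1(String s1, String s2) {
        return s1 + s2;
    }

    static void concat2(StringBuffer s1, String s2) {
        s1.append(s2);
    }

    public static void main(String[] args) {
        int n = 10000;

        long t0 = System.currentTimeMillis();
        String s = "";
        for (int i = 0; i < n; i++) {
            s = concat1(s, "x");   // builds a brand-new String each pass
        }
        long t1 = System.currentTimeMillis();

        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < n; i++) {
            concat2(sb, "x");      // appends in place
        }
        long t2 = System.currentTimeMillis();

        System.out.println("String: " + (t1 - t0) + " ms, "
            + "StringBuffer: " + (t2 - t1) + " ms");
        // both approaches produce the same text
        System.out.println(s.equals(sb.toString()));  // true
    }
}
```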
A final example illustrates how examining bytecode can help
prevent bugs in your application. Create two classes as
follows. Make sure they are in separate files.
public class ChangeALot {
    public static final boolean debug = false;
    public static boolean log = false;
}

public class EternallyConstant {
    public static void main(String[] args) {
        System.out.println("EternallyConstant beginning execution");
        if (ChangeALot.debug)
            System.out.println("Debug mode is on");
        if (ChangeALot.log)
            System.out.println("Logging mode is on");
    }
}
If you run the class EternallyConstant, you should get the message:
EternallyConstant beginning execution
Now try editing the ChangeALot file, modifying the debug and log
variables to both be true. Recompile only the ChangeALot file.
Run EternallyConstant again, and you will see the following
output:
EternallyConstant beginning execution
Logging mode is on
What happened to the debugging mode? Even though you set debug to
true, the message "Debug mode is on" didn't appear. The answer is
in the bytecode. Run javap on the EternallyConstant class, and you
will see this:
Method void main(java.lang.String[])
0 getstatic #2
3 ldc #3
5 invokevirtual #4
8 getstatic #5
11 ifeq 22
14 getstatic #2
17 ldc #6
19 invokevirtual #4
22 return
Surprise! While there is an 'ifeq' check on the log field, the
code does not check the debug field at all. Because the debug
field was marked final, the compiler knew that the debug field
could never change at runtime. Therefore, it optimized the 'if'
statement branch by removing it. This is a very useful
optimization indeed, because it allows you to embed debugging
code in your application and pay no runtime penalty when the
switch is set to false. Unfortunately, this optimization can
lead to major compile-time confusion. If you change a final field,
you have to remember to recompile any other class that might
reference the field. That's because the 'reference' might have
been optimized away. Java development environments do not always
detect this subtle dependency, something that can lead to very
odd bugs. So, the old C++ adage is still true for the Java
environment: "When in doubt, rebuild all."
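One way to reduce exposure to this pitfall, sketched here as an illustrative pattern rather than anything from the original tip, is to publish the flag through an accessor method instead of a public static final field. The call site then compiles to an invokestatic, so the value is read from the flag-holding class at run time rather than inlined into the client at compile time:

```java
// Illustrative pattern: expose configuration through an accessor
// method rather than a public static final field. Clients compile
// a call to isDebug() instead of an inlined constant, so
// recompiling only this class is enough to change the behavior.
public class Config {
    private static final boolean DEBUG = false;
    public static boolean isDebug() {
        return DEBUG;  // any inlining of this call happens at run time
    }
}
```

The trade-off is that javac can no longer remove the debug branch entirely, so you give up the zero-cost-when-disabled property described above; a JIT compiler will typically inline the call at run time anyway.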
Knowing a little bytecode is a valuable assist to any programmer
coding in the Java programming language. The javap tool makes it
easy to view bytecodes. Occasionally checking your code with javap
can be invaluable in improving performance and catching
particularly elusive bugs.
There is substantially more complexity to bytecode and the VM
than this tip can cover. To learn more, read Inside the Java
Virtual Machine by Bill Venners.
. . . . . . . . . . . . . . . . . . . . . . .
August 1, 2000. This issue is about the Java(tm) Native Interface
(JNI). JNI is a powerful tool for building Java applications that
interoperate with other languages, especially C++. An important
thing to understand when you use JNI to integrate C++ code into
a program written in the Java(tm) programming language is how JNI
forces the Java and C++ memory management models to coexist in one
process. This issue of the JDC Tech Tips covers two memory
management issues that arise in JNI programming:
* Caching objects in JNI
* Accessing arrays in JNI
These tips assume that you have some familiarity with JNI and that
you know how to compile native JNI libraries with your C++ compiler
of choice. If you are unfamiliar with JNI, see the Java Native
Interface trail in the Java tutorial at
http://java.sun.com/docs/books/tutorial/native1.1/index.html
These tips were developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
This issue of the JDC Tech Tips is written by Stuart Halloway,
a Java specialist at DevelopMentor (http://www.develop.com/java).
You can view this issue of the Tech Tips on the Web at
http://developer.java.sun.com/developer/TechTips/2000/tt0801.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
CACHING OBJECTS IN JNI
One of the features of JNI is that it allows your native code,
such as C++, to use Java objects. However this sometimes presents
a problem in dealing with the "lifetime" of objects, that is, the
time between an object's allocation and deallocation. Java manages
an object's allocation through new, and indirectly manages its
deallocation through garbage collection. However C++ requires
explicit control of the entire lifetime through new and
delete. Because JNI straddles both the world of the Java language
and C++, an awkward compromise must be reached. JNI provides
explicit mechanisms to manage an object's lifetime, as in C++. But
these mechanisms do not directly control lifetime. Instead, they
give hints to the Java garbage collector. This creates a difficult
situation for the developer. JNI object references have
non-deterministic destruction, as in the Java environment; you
cannot determine specifically when an object's resources will be
reclaimed. And misusing JNI object references can crash the entire
process, as in C++!
This tip shows you how to correctly manage an object's
deallocation in your JNI code.
Let's look at a simple example that uses native code to find the
maximum value in an array of integers:
//java code Max.java
import java.util.*;
public class Max {
public static final int ARRAY_SIZE = 1000;
public static int[] arr = initAnArray();
static {
System.loadLibrary("Max");
}
public static int[] initAnArray() {
int[] arr = new int[ARRAY_SIZE];
Random rnd = new Random();
for (int n=0; n < ARRAY_SIZE; n++) {
arr[n] = rnd.nextInt();
}
return arr;
}
public static int max(int[] arr) {
int max = Integer.MIN_VALUE;
int current;
for (int n=0; n < arr.length; n++) {
current = arr[n];
if (current > max) {
max = current;
}
}
return max;
}
public static native int nativeMax(int[] mins);
public static native int nativeMaxCritical(int[] mins);
public static void main(String [] args) {
System.out.println("max=" + max(arr));
//System.out.println("nativeMax=" + nativeMax(arr));
//System.out.println("nativeMaxCritical=" + nativeMaxCritical(arr));
}
}
This program calls a max function that is implemented in Java
code. There are also calls, initially commented out, to two other
versions of the max function: nativeMax() and nativeMaxCritical().
When the calls are uncommented, the functions will need native
language implementations, such as C++.
It would be nice if the native code could take advantage of
certain Java programming language features, such as using
System.out.println() for logging messages to the console.
One way to add this feature is to implement the JNI_OnLoad
method in your C++ library:
//C++ CODE Max.cpp
#include <jni.h>
#include <limits.h>
//cache the methodID and object needed to call System.out.println
static jmethodID midPrintln;
static jobject objOut;
extern "C" {
JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *reserved)
{
JNIEnv* env = 0;
jclass clsSystem = 0;
jclass clsPrintStream = 0;
jfieldID fidOut = 0;
jstring msg = 0;
if (JNI_OK != vm->GetEnv((void **)&env, JNI_VERSION_1_2)) {
return JNI_ERR;
}
clsSystem = env->FindClass("java/lang/System");
if (!clsSystem) return JNI_ERR;
clsPrintStream = env->FindClass("java/io/PrintStream");
if (!clsPrintStream) return JNI_ERR;
fidOut = env->GetStaticFieldID(clsSystem, "out", "Ljava/io/PrintStream;");
if (!fidOut) return JNI_ERR;
objOut = env->GetStaticObjectField(clsSystem, fidOut);
if (!objOut) return JNI_ERR;
midPrintln = env->GetMethodID(clsPrintStream, "println", "(Ljava/lang/String;)V");
if (!midPrintln) return JNI_ERR;
msg = env->NewStringUTF("MAX library loaded");
if (!msg) return JNI_ERR;
env->CallVoidMethod(objOut, midPrintln, msg);
return JNI_VERSION_1_2;
}
}
The JNI_OnLoad entry point is called once, when the native library
is loaded by a call to System.loadLibrary. In this example, the
line:
env->CallVoidMethod(objOut, midPrintln, msg);
actually does the work of calling System.out.println. Before this
line of code can execute, some preparation must take place. The
calls to FindClass return jclass references to java.lang.System
(to reach the out field) and java.io.PrintStream (to reach the
println method). In JNI, fields and methods must be accessed by
first requesting an ID, done here by the GetStaticFieldID and
GetMethodID methods. Finally, the string to be printed must be
allocated using the NewStringUTF helper method. Notice that
midPrintln and objOut are cached in static variables. This helps
avoid having to do all the preparation work the next time
System.out.println is used. Caching is also an important
performance optimization in JNI -- you do not want to repeatedly
look up objects and IDs.
Compile both the Java code and the C++ code into the same
directory. Then run the program from that directory using the
command:
java -cp . Max
You should see the output "MAX library loaded" on standard
output.
Although this code seems to work, it does not correctly manage
object references. Referring back to the code, notice that
the methods on the JNIEnv* fall into two categories: (1) those
that return IDs, and (2) those that return some type of object
reference. You do not need to worry about the IDs because they
do not represent any special claim on resources. The methods and
fields are there as long as the class is loaded, whether you use
them from JNI or not. The object references are more challenging.
Unless otherwise documented, all JNI methods return local
references. A local reference is a thread-local, method-local
handle to a Java object. In other words, you have permission to
use the object only for the duration of the JNI method, and only
from the calling thread. This gives the garbage collector
a well-defined opportunity to collect the object, that is,
when you return from a method.
The JNI_OnLoad method above obtains four local references:
clsSystem, clsPrintStream, objOut, and msg. Each of these
references is valid only for the duration of the JNI_OnLoad call.
For clsSystem, clsPrintStream, and msg, this is exactly what you
want; these objects are only used within the method. Just as in
the Java programming language, you do not have to worry about
deallocating these objects. Garbage collection will take care
of them. However the objOut handle is processed differently. It
is cached in a static variable for later use. This leads to
undefined behavior, that is, there is no guarantee that the
handle is still valid. The following native methods demonstrate
the problem:
//make sure these are inside the extern "C" block
JNIEXPORT jint JNICALL Java_Max_nativeMax
(JNIEnv *env, jclass, jintArray arr)
{
jstring msg = env->NewStringUTF("nativeMax not implemented yet");
if (!msg) return 0;
env->CallVoidMethod(objOut, midPrintln, msg);
return 0;
}
JNIEXPORT jint JNICALL Java_Max_nativeMaxCritical
(JNIEnv *env, jclass, jintArray arr)
{
jstring msg = env->NewStringUTF("nativeMaxCritical not implemented yet");
if (!msg) return 0;
env->CallVoidMethod(objOut, midPrintln, msg);
return 0;
}
In the next tip, these methods will have complete implementations,
but for now they just use System.out.println to report that they
are incomplete. Go back and uncomment the calls to nativeMax and
nativeMaxCritical in Max.main, and try running the program.
Depending on which Java(tm) Runtime Environment (JRE) and
underlying OS you are using, one of several things might happen:
- the program might crash
- the program might run normally
- the program might fail with a "FATAL ERROR in native method"
This kind of unpredictable behavior never happens in Java
programs, but is standard for C++ programs. Unfortunately, JNI
code is similar to C++ code in that the behavior of the code
that mismanages memory is undefined. Undefined behavior is much
worse than a simple crash because you might not realize there
is a program bug. This is particularly true if the code often
runs normally (sometimes known as the "it worked on my machine"
syndrome). Undefined behavior makes finding code defects very
difficult.
JRE 1.2 and the classic VM of JRE 1.3 have a non-standard
command line option that can help you track down JNI bugs. Try
running the program again with the "-Xcheck:jni" option. If you
are running JRE 1.3, you will have to select the classic VM
with the classic option:
(if 1.2) java -cp . -Xcheck:jni Max
(if 1.3) java -classic -cp . -Xcheck:jni Max
If you are lucky, you will get the following descriptive error:
FATAL ERROR in native method: Bad global or local ref passed to JNI
at Max.nativeMax(Native Method)
at Max.main(Max.java:75)
It is a good idea to use the "-Xcheck:jni" flag during
development, but you should not count on this to find all
JNI-related problems. The best approach is careful analysis
of your java object references, plus code review.
In the example above, fixing the objOut reference is a simple
matter. Instead of a local reference, objOut should be stored in
a global reference. While a local reference is bound to a thread
and method call, a global reference lives until you specifically
delete it. The NewGlobalRef function creates a global reference
to any existing reference. Modify the JNI_OnLoad function,
that is, replace the following lines in JNI_OnLoad:
objOut = env->GetStaticObjectField(clsSystem, fidOut);
if (!objOut) return JNI_ERR;
with the following lines:
jobject localObjOut = env->GetStaticObjectField(clsSystem, fidOut);
if (!localObjOut) return JNI_ERR;
objOut = env->NewGlobalRef(localObjOut);
Notice that the static type of a global reference is the same as
the static type of a local reference (both are jobject). This
means that you must remember which references are global and which
are local; the compiler will not assist you. In the code above,
objOut holds a global reference which will prevent the garbage
collector from invalidating the reference. In this example,
a global reference provides exactly the desired behavior, keeping
the reference cached for the lifetime of the application. If you
need a reference to live longer than a method, but not forever,
you can match the call to NewGlobalRef() with a subsequent call to
DeleteGlobalRef().
If you recompile the C++ library with this new code, Max should
run correctly, and -Xcheck:jni should not report any problems.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ACCESSING ARRAYS IN JNI
Now that the object references are in order, it is time to
actually implement the max method in C++ code. If you scanned
jni.h, you would find three methods that offer access to an array
of Java integers:
void GetIntArrayRegion(jintArray array, jsize start, jsize len, jint *buf)
jint* GetIntArrayElements(jintArray array, jboolean *isCopy)
void* GetPrimitiveArrayCritical(jintArray array, jboolean *isCopy)
While each of these methods can be used to access any int array,
they have radically different semantics and performance
characteristics. Choosing the right one is critical to writing
correct, high-performance code.
GetIntArrayRegion is the simplest to use, because you never touch
the actual array data. Instead, you allocate a buffer, and some
portion of the array is copied into your buffer. Because the array
is copied, GetIntArrayRegion is rarely the best option for high
performance.
GetIntArrayElements asks the JRE to give you a pointer into
the actual array data. Sharing array memory with the JRE is
called "pinning" the array, and when you are done you
must unpin the array with a call to ReleaseIntArrayElements.
Think of GetIntArrayElements as a polite request for a pointer
to the array data; it is not a demand for a pointer. You can use
the isCopy parameter to find out if your data is the actual array
data or your own private copy.
GetPrimitiveArrayCritical was added to JDK 1.2 to improve the
performance of array operations. Like GetIntArrayElements, the
critical API also asks the JRE for a pointer to the real data,
but this time the question is more of a demand. The critical API
tells the JRE to do everything possible to provide direct access.
This can include blocking other threads and even disabling all
garbage collection to guarantee safe access to the array data.
Because the JRE might be blocking many other operations while
you are accessing the array, you should exit the critical region
as soon as possible. Do this by calling
ReleasePrimitiveArrayCritical. Also, be careful not to call
other JNI functions, or do anything that could cause the current
thread to block.
Which array API is best for the max example? In the example, the
array data is traversed a single time and in read-only fashion.
This is a case where direct access to the data should provide a
substantial speedup. So you should probably use
GetIntArrayElements or GetPrimitiveArrayCritical. Here's the code
for each:
JNIEXPORT jint JNICALL Java_Max_nativeMax
(JNIEnv *env, jclass, jintArray arr)
{
jstring msg = env->NewStringUTF("in nativeMax");
if (!msg) return 0;
env->CallVoidMethod(objOut, midPrintln, msg);
jboolean isCopy = JNI_FALSE;
jint* elems = env->GetIntArrayElements(arr, &isCopy);
if (!elems) return 0; //exception already pending
jsize length = env->GetArrayLength(arr);
jint max = INT_MIN;
jint current = 0;
for (int n=0; n < length; n++) {
current = elems[n];
if (current > max) {
max = current;
}
}
env->ReleaseIntArrayElements(arr, elems, JNI_ABORT);
return max;
}
JNIEXPORT jint JNICALL Java_Max_nativeMaxCritical
(JNIEnv *env, jclass, jintArray arr)
{
jstring msg = env->NewStringUTF("in nativeMaxCritical");
if (!msg) return 0;
env->CallVoidMethod(objOut, midPrintln, msg);
jboolean isCopy = JNI_FALSE;
jint* elems = (jint*) env->GetPrimitiveArrayCritical(arr, &isCopy);
if (!elems) return 0; //exception already pending
jsize length = env->GetArrayLength(arr);
jint max = INT_MIN;
jint current = 0;
for (int n=0; n < length; n++) {
current = elems[n];
if (current > max) {
max = current;
}
}
env->ReleasePrimitiveArrayCritical(arr, elems, JNI_ABORT);
return max;
}
Notice that the two versions of the code are almost identical.
They differ in the names of the Get/Release pair. The array code
itself is trivial. In fact, the only interesting detail is the
third parameter to the release function: JNI_ABORT. The JNI_ABORT
flag specifies that if you are using a local copy of the array,
there is no need to copy back to the real array. If you wind up
working with a copy of the array, this is a major performance
savings. Since the array was never written to, it's silly to copy
it back.
The behavior of GetIntArrayElements and GetPrimitiveArrayCritical
is not guaranteed. Either API can at any time return a copy or
a direct pointer to the data. This means that you have to test
your code on your specific JRE to determine whether you are
getting a performance boost from direct access.
Here is a summary of results obtained from testing the max
example on the 1.2 and 1.3 JREs. A debugger was used to check
the isCopy value. Benchmark code was used to compare the
performance of the three max implementations. You can find the
benchmark code at
http://staff.develop.com/halloway/JavaTools.html.
---------------------------------------------------------------
Test Copied Array? Time (microsec)
---------------------------------------------------------------
1.2 max no 18
1.2 nativeMax no 18
1.2 nativeMaxCritical no 15
1.3 max no 25
1.3 nativeMax yes 27
1.3 nativeMaxCritical no 15
---------------------------------------------------------------
Key:
1.2 tests are with classic VM, JIT
1.3 tests are with the Java HotSpot(tm) Server VM
---------------------------------------------------------------
It would be unwise to jump to any conclusions from these results.
The result will differ on different machines or with different
sized arrays. However the results do suggest that:
(1) Copying arrays is expensive. In the one case (1.3 nativeMax)
where the array was copied, performance was noticeably slower.
(2) Native code is not always faster than equivalent Java code.
Even when native code is faster, it doesn't represent an
order of magnitude improvement.
(3) It is difficult to benchmark HotSpot code. HotSpot tends to
fare poorly on benchmarks, but to shine in real applications.
Also, a simple looping benchmark cannot tell you much about the
behavior of a heavily threaded (read: server) application. If a
JRE blocks other threads in order to give direct access to memory,
overall throughput can actually be worse with direct access to
arrays. In that situation, it would be better to use the
GetIntArrayRegion API to create a working copy of the array.
As you can see, JNI code becomes tricky to write as soon as you
begin to do any serious work. You must explicitly manage the
lifetime of objects by correctly choosing local or global
references, and run tests to determine the array accessor that
gives the best performance for your application.
For further information about JNI, see the following publications:
o The Java Native Interface: Programmer's Guide and Specification
(Java Series), by Sheng Liang
(http://java.sun.com/docs/books/jni/index.html).
o Java Platform Performance Strategies and Tactics (Java Series),
by Steve Wilson and Jeff Kesselman.
(http://java.sun.com/docs/books/performance/). There is a very
interesting chapter on JNI performance.
. . . . . . . . . . . . . . . . . . . . . . .
July 11, 2000. This issue covers:
* Using Shutdown Hooks
* Automating GUI Programs With java.awt.Robot
These tips were developed using Java(tm) 2 SDK, Standard Edition,
v 1.3.
You can view this issue of the Tech Tips on the Web at
http://developer.java.sun.com/developer/TechTips/2000/tt0711.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING SHUTDOWN HOOKS
Suppose that you're writing an application, and you'd like to
gain control when the application shuts down. You might want
to do this in order to close files that are open, for example,
close a log file that the application has written to.
One way to gain control is simply to have a shutdown method that
you call before calling exit:
callShutdown();
System.exit(0);
This approach works if your application terminates in only one
place, that is, if System.exit is called from only one place, or if
you call callShutdown everywhere that you exit. This approach also
requires that you catch all exceptions that are thrown, which
otherwise would terminate the program abnormally.
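One conventional way to guarantee the cleanup call on both the normal and the exceptional path is a try/finally around the application body. This is only a sketch, with hypothetical method names:

```java
// Sketch: run the application body inside try/finally so that the
// cleanup code runs whether main returns normally or throws.
public class FinallyShutdown {
    static boolean cleanedUp = false;
    // hypothetical cleanup: close files, flush logs, etc.
    static void callShutdown() {
        cleanedUp = true;
    }
    // hypothetical application work; may throw at any point
    static void runApp() {
    }
    public static void main(String[] args) {
        try {
            runApp();
        } finally {
            callShutdown(); // runs even if runApp throws
        }
    }
}
```

A try/finally still does not help if some library code calls System.exit directly, which is one situation the shutdown hooks described in this tip handle cleanly.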
In JDK 1.3 there's another way to handle shutdown: using shutdown
hooks. A shutdown hook is an initialized thread that has not yet
been started. In other words, a shutdown hook is an object of a
class derived from the Thread class, with a run method that is
called to perform whatever actions you want. You register this
object with the Java(tm) Virtual Machine (JVM)*.
Here's an example:
import java.io.*;
public class ShutdownDemo {
private FileWriter fw_log;
private BufferedWriter bw_log;
// constructor that opens the log file
public ShutdownDemo() throws IOException {
fw_log = new FileWriter("log.txt");
bw_log = new BufferedWriter(fw_log);
// register the shutdown hook
Runtime.getRuntime().addShutdownHook(new Thread() {
public void run() {
endApp();
}
});
}
// do some application processing and write to the log file
public void processApp1() throws IOException {
bw_log.write("testing");
bw_log.newLine();
}
// do some application processing resulting in an exception
public void processApp2() {
throw new RuntimeException();
}
// close the log file
public void endApp() {
try {
bw_log.close();
}
catch (IOException e) {
System.err.println(e);
}
}
public static void main(String args[]) throws IOException {
// create an application object
ShutdownDemo demo = new ShutdownDemo();
// do some processing
demo.processApp1();
// do some more processing that results in an exception
demo.processApp2();
}
}
This application creates an instance of the ShutdownDemo class,
representing an application. The constructor for the class opens
a log file, using FileWriter and BufferedWriter.
ShutdownDemo then calls processApp1. In its processing, processApp1
writes an entry to the log file. Then processApp2 is called. It
throws an exception that is not caught by the application. Normally,
this exception would terminate the application; the log file
entry previously written would be lost because the output is sitting
in a buffer which has not been flushed to disk.
But in this demo the output is not lost. This is because the
application registers a shutdown hook by saying:
Runtime.getRuntime().addShutdownHook(new Thread() {
public void run() {
endApp();
}
});
Notice the use of an anonymous inner class. Here an instance of an
unnamed class derived from Thread is created, and a run method that
calls endApp is defined for the class.
All of this means that when the application is about to terminate,
the JVM starts the thread represented by the passed-in thread
object. When the thread starts, the run method is called. The run
method calls endApp, which closes the log file. This flushes the
output buffer.
To underscore the effect of the shutdown hook, comment out the
addShutdownHook lines in ShutdownDemo. You'll see that the log
file is empty when the program terminates.
You can register multiple shutdown hooks. In this case, each thread
that represents a hook is started in an unspecified order, and the
various threads run simultaneously. You cannot register or
unregister a shutdown hook after the shutdown sequence has started.
Doing so results in an IllegalStateException.
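If you keep a reference to the hook thread, you can also unregister it with Runtime.removeShutdownHook before shutdown begins. A minimal sketch (the class and method names are illustrative):

```java
// Sketch: register a shutdown hook, then unregister it. Keeping a
// reference to the Thread object is what makes removal possible.
public class HookToggle {
    // registers the hook, then removes it; returns true if the hook
    // had been registered and was successfully de-registered
    static boolean registerAndRemove(Thread hook) {
        Runtime.getRuntime().addShutdownHook(hook);
        // ... later, before the shutdown sequence starts:
        return Runtime.getRuntime().removeShutdownHook(hook);
    }
    public static void main(String[] args) {
        Thread hook = new Thread() {
            public void run() {
                System.out.println("cleaning up");
            }
        };
        System.out.println("removed=" + registerAndRemove(hook));
    }
}
```

Because the hook is removed before shutdown, its run method never executes in this sketch; removeShutdownHook throws the same IllegalStateException as addShutdownHook if shutdown has already begun.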
Because shutdown hooks run as threads, you must use thread-safe
programming techniques. Otherwise you risk having threads interfere
with each other. Also, it's wise to design your application for
simple and fast shutdown processing. For example, you might run
into trouble if your application uses services during shutdown that
are themselves in the process of being shut down.
There are cases where shutdown processing does not happen even if
you have registered shutdown hooks. One example is a corrupted
native method, for example, one that dereferences a null pointer
in C code.
This feature is somewhat similar to the atexit library function in
C/C++.
For further information about shutdown hooks, see:
http://java.sun.com/j2se/1.3/docs/guide/lang/enhancements.html#hooks.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
AUTOMATING GUI PROGRAMS WITH JAVA.AWT.ROBOT
Imagine that you have a Java GUI program written using the AWT and
Swing libraries, and you'd like to automate the program. For
example, you'd like to automatically supply GUI input events from
the keyboard and mouse to the program. In this way, the program
could operate without user intervention. This type of automation
might be useful for testing purposes, or to produce a self-running
demo program.
java.awt.Robot is a new class in JDK 1.3, designed to handle this
type of automation. It's a way to automatically feed input events
to a program. The events are generated in the native input queue
of the platform, as if they had actually been generated by the
user. For example, using java.awt.Robot, you can generate an event
that is equivalent to a user moving the mouse to particular (X,Y)
coordinates on the screen.
Here's an example of how you can use this class:
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
public class RobotDemo {
public static void main(String args[]) throws AWTException {
// set up frames and panels
JFrame frame = new JFrame("RobotDemo");
JPanel panel = new JPanel();
panel.setLayout(new GridLayout(3, 1));
// set up fields, labels, and buttons
final JTextField field = new JTextField(10);
final JLabel lab = new JLabel();
field.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
String s = "Length: " +
field.getText().length();
lab.setText(s);
}
});
JButton button = new JButton("Exit");
button.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
System.exit(0);
}
});
// add components to panel and display
panel.add(field);
panel.add(lab);
panel.add(button);
frame.getContentPane().add(panel);
frame.setSize(200, 150);
frame.setLocation(200, 200);
frame.setVisible(true);
// create a robot to feed in GUI events
Robot rob = new Robot();
// enter some keystrokes
int keyinput[] = {
KeyEvent.VK_T,
KeyEvent.VK_E,
KeyEvent.VK_S,
KeyEvent.VK_T,
KeyEvent.VK_I,
KeyEvent.VK_N,
KeyEvent.VK_G
};
rob.delay(1000);
rob.keyPress(KeyEvent.VK_SHIFT);
field.requestFocus();
for (int i = 0; i < keyinput.length; i++) {
rob.keyPress(keyinput[i]);
rob.delay(1000);
}
rob.keyRelease(KeyEvent.VK_SHIFT);
rob.keyPress(KeyEvent.VK_ENTER);
// move cursor to Exit button
Point p = button.getLocationOnScreen();
rob.mouseMove(p.x + 5, p.y + 5);
rob.delay(2000);
// press and release left mouse button
rob.mousePress(InputEvent.BUTTON1_MASK);
rob.delay(2000);
rob.mouseRelease(InputEvent.BUTTON1_MASK);
}
}
The demo sets up a panel containing an input field, a label, and
an Exit button. This part of the demo is typical of many Swing
applications. Then the demo creates a Robot object, and feeds
into it a series of keystrokes. These keystrokes mimic keys typed
by a user, that is, the keys T, E, S, T, I, N, and G. There is
a delay of 1000 milliseconds between keystrokes; this helps
present the animation more clearly. The shift key is held down
throughout, so that the letters are entered as capitals. At the
end of the text input, Enter is specified. This causes the length
of the input to be echoed in the label field. Then the mouse cursor
is moved to the Exit button, and the left mouse button is pressed
and released. This terminates the program.
Notice that virtual keycodes are used to enter keystrokes; keycodes
are not the same as Java characters. KeyEvent.VK_A corresponds to
pressing the unshifted 'A' key on a keyboard. If you specify 'A' or
'a' instead of KeyEvent.VK_A, you get unexpected results.
Also note that the documentation for Robot says that some platforms
require special privileges to access low-level input control. One
specific case is X Windows, which requires the XTEST 2.2 standard
extension.
For further information about java.awt.Robot, see
http://java.sun.com/j2se/1.3/docs/api/java/awt/Robot.html.
For further information about key events, see
http://java.sun.com/j2se/1.3/docs/api/java/awt/event/KeyEvent.html.
For further information about mouse events, see
http://java.sun.com/j2se/1.3/docs/api/java/awt/event/MouseEvent.html.
. . . . . . . . . . . . . . . . . . . . . . .
June 13, 2000. This issue covers:
* Using BreakIterator to Parse Text
* Goto Statements and Java(tm) Programming
These tips were developed using Java(tm) 2 SDK, Standard Edition,
v 1.2.2.
You can view this issue of the Tech Tips on the Web at
http://developer.java.sun.com/developer/TechTips/2000/tt0613.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING BREAKITERATOR TO PARSE TEXT
The standard Java(tm) packages such as java.util include several
classes that you can use to break text into words or other logical
units. One of these classes is java.util.StringTokenizer. When you
use StringTokenizer, you specify a set of delimiter characters;
instances of StringTokenizer then return words delimited by these
characters. java.io.StreamTokenizer is a class that does something
similar.
These classes are quite useful. However they have some limitations.
This is especially true when you're trying to parse text that
represents human language. For example, the classes don't have
built-in knowledge of punctuation rules, and the classes might
define a "word" as simply a string of contiguous non-whitespace
characters.
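To see the kind of limitation in question, consider this small sketch: with its default delimiters, StringTokenizer treats any run of non-whitespace as a word, so the trailing period stays attached to the last token:

```java
// Sketch of StringTokenizer's whitespace-based notion of a "word".
import java.util.StringTokenizer;

public class TokenDemo {
    public static void main(String[] args) {
        // default delimiters: space, tab, newline, etc.
        StringTokenizer st = new StringTokenizer("This is a test.");
        while (st.hasMoreTokens()) {
            System.out.println(st.nextToken());
        }
        // the last token is "test." -- the period is not separated
    }
}
```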
java.text.BreakIterator is a class specifically designed to parse
human language text into words, lines, and sentences. To see how it
works, here's a simple example:
import java.text.BreakIterator;
public class BreakDemo1 {
public static void main(String args[]) {
// string to be broken into sentences
String str = "\"Testing.\" \"???\" (This is a test.)";
// create a sentence break iterator
BreakIterator brkit =
BreakIterator.getSentenceInstance();
brkit.setText(str);
// iterate across the string
int start = brkit.first();
int end = brkit.next();
while (end != BreakIterator.DONE) {
String sentence = str.substring(start, end);
System.out.println(start + " " + sentence);
start = end;
end = brkit.next();
}
}
}
The input string is:
"Testing." "???" (This is a test.)
It is immediately apparent that parsing this input is not trivial.
For example, suppose you follow a simple rule that a sentence ends
with a period. That rule breaks down in practice, as the following
two sentences demonstrate; both are considered correct:
"This is a test."
"This is a test".
The first form is the more standard in long-standing English
usage.
BreakIterator applies a set of rules to handle situations such as
this. When you run the BreakDemo1 program in the United States
locale, the result is:
0 "Testing."
11 "???"
17 (This is a test.)
The numbers are offsets into the string where each sentence starts.
In other words, BreakIterator returns a series of offsets that tell
where each unit (sentence, word, and so on) starts in the string.
BreakIterator is particularly useful in applications such as word
processing, where, for example, you might be trying to find the
location of the next sentence in some currently displayed text.
The demo program uses default locale settings, but it could have
specified a specific locale, for example:
... BreakIterator.getSentenceInstance(Locale.GERMAN);
Another way you can use BreakIterator is to find line breaks,
that is, locations in text where a line could be broken for
text formatting. Here's an example:
import java.text.BreakIterator;

public class BreakDemo2 {
    public static void main(String args[]) {
        // string to be broken into lines
        String str = "This sen-tence con-tains hyphenation.";

        // create a line break iterator
        BreakIterator brkit =
            BreakIterator.getLineInstance();
        brkit.setText(str);

        // iterate across the string
        int start = brkit.first();
        int end = brkit.next();
        while (end != BreakIterator.DONE) {
            String line = str.substring(start, end);
            System.out.println(start + " " + line);
            start = end;
            end = brkit.next();
        }
    }
}
Program output is:
0 This
5 sen-
9 tence
15 con-
19 tains
25 hyphenation.
BreakIterator applies punctuation rules about where text can be
broken, such as between words or within a hyphenated word (but not
between a word and a following ".").
You can also use BreakIterator to find word and character breaks.
It's important to note that in finding breaks, BreakIterator
analyzes characters independently of how they are stored.
A "character" in a human language is not necessarily equivalent to
a single Java 16-bit char. For example, an accented character might
be stored as a base character along with a mark. BreakIterator
analyzes these kinds of composite characters as a single character.
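As a sketch of word breaks, the hypothetical helper below counts the
words in a string using BreakIterator.getWordInstance. The word
iterator reports boundaries around every segment, including spaces
and punctuation, so the code counts only segments that start with a
letter or digit (the name countWords is ours):

```java
import java.text.BreakIterator;

public class WordCount {
    // Counts the word units in a string using a word break iterator
    public static int countWords(String s) {
        BreakIterator brkit = BreakIterator.getWordInstance();
        brkit.setText(s);
        int count = 0;
        int start = brkit.first();
        int end = brkit.next();
        while (end != BreakIterator.DONE) {
            // segments that start with a letter or digit are words;
            // the other segments hold spaces and punctuation
            if (Character.isLetterOrDigit(s.charAt(start))) {
                count++;
            }
            start = end;
            end = brkit.next();
        }
        return count;
    }
    public static void main(String args[]) {
        System.out.println(countWords("\"Testing.\" (This is a test.)"));
        // prints 5
    }
}
```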
One final note about BreakIterator: it's intended for use with
human languages, not computer ones. For example, a "sentence" in
programming language source code has little meaning.
For more information about BreakIterator, see
http://java.sun.com/products/jdk/1.2/docs/api/java/text/BreakIterator.html
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
GOTO STATEMENTS AND JAVA(TM) PROGRAMMING
Suppose you write a C/C++ program that searches a 5 x 5 array
to find the first occurrence of a particular value. You might use
the following approach:
#include <stdio.h>

/* 5 x 5 array of numbers */
#define N 5

static int vec[N][N] = {
    {1, 2, 3, 4, 5},
    {2, 3, 4, 5, 6},
    {3, 4, 5, 6, 7},
    {4, 5, 6, 7, 8},
    {5, 6, 7, 8, 9}
};

/* target number to be searched for */
static int TARGET = 8;

int main() {
    int i = 0;
    int j = 0;
    int found = 0;

    /* iterate through the array, looking for the target */
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            if (vec[i][j] == TARGET) {
                found = 1;
                goto done;
            }
        }
    }
done:
    if (found) {
        printf("Found at %d %d\n", i, j);
    }
    return 0;
}
If you run the program, you get the result:
Found at 3 4
In this example, a loop nested in another loop is used to find
the matching array element. If the program finds the element, it
needs to "break" from the nested loops. It's not sufficient to
simply break from the inner loop. Doing that only returns control
to the outer loop; it does not actually terminate both loops. So
a goto is used to jump out of the inner loop and transfer control
to the "done:" label. Using a goto is not the only way to solve the
problem in C/C++, but this is one place where a goto is sometimes
used.
Goto statements are controversial. One problem is that it's hard
to control the program logic effectively if you use these
statements. For example, look again at the program above. It's
clear that the "found" test that is just after the "done:" label
is intended for use after the loop has terminated (that is, after
the loop terminates normally or through the goto). But there's no
way to enforce this rule; control can be transferred to this label
from anywhere in the function.
In the Java(tm) programming language, goto is a reserved word;
the Java programming language does not have a goto statement.
However there are alternative statements that you can use in
the Java programming language in place of the goto statement.
This tip demonstrates three alternative statements.
The first of these is a rewrite of the above program:
public class ControlDemo1 {
    // 5 x 5 array of numbers
    static int vec[][] = {
        {1, 2, 3, 4, 5},
        {2, 3, 4, 5, 6},
        {3, 4, 5, 6, 7},
        {4, 5, 6, 7, 8},
        {5, 6, 7, 8, 9}
    };
    static final int N = 5;

    // target number to be searched for
    static final int TARGET = 8;

    public static void main(String args[]) {
        int i = 0;
        int j = 0;
        boolean found = false;

        // iterate through the array, looking for the target
        outer:
        for (i = 0; i < N; i++) {
            for (j = 0; j < N; j++) {
                if (vec[i][j] == TARGET) {
                    found = true;
                    break outer;
                }
            }
        }
        if (found) {
            System.out.println("Found at " + i + " " + j);
        }
    }
}
The key point in this example is that break statements can be
labeled, that is, a break can designate a labeled loop. Specifying
"break outer" in the above example terminates the loop labeled
"outer". In other words, the break statement terminates both
loops.
The same idea applies to continue statements, for example:
public class ControlDemo2 {
    public static void main(String args[]) {
        outer:
        for (int i = 1; i <= 3; i++) {
            for (int j = 1; j <= 3; j++) {
                System.out.println(i + " " + j);
                if (i == 2 && j == 2) {
                    continue outer;
                }
            }
        }
    }
}
Output here is:
1 1
1 2
1 3
2 1
2 2
3 1
3 2
3 3
Break statements are normally used in loop and switch statements,
but you can also use them in any labeled block. Here's an example
that illustrates this idea:
public class ControlDemo3 {
    // add two numbers together, a >= 0 and b >= 0
    // throw IllegalArgumentException if a or b out of range
    static int add(int a, int b) {
        block1: {
            if (a < 0) {
                break block1;
            }
            if (b < 0) {
                break block1;
            }
            return a + b;
        }
        throw new IllegalArgumentException("a < 0 || b < 0");
    }

    public static void main(String args[]) {
        // legal case
        try {
            int a = 37;
            int b = 47;
            int c = add(a, b);
            System.out.println(c);
        }
        catch (IllegalArgumentException e) {
            System.err.println(e);
        }

        // illegal case
        try {
            int a = 37;
            int b = -47;
            int c = add(a, b);
            System.out.println(c);
        }
        catch (IllegalArgumentException e) {
            System.err.println(e);
        }
    }
}
In this example there's a block labeled "block1". The program
handles errors by breaking out of the block. If there are no
errors, the program returns normally from within the block.
An error causes an exception to be thrown after the block is
exited. Note in this example that there are other ways of
structuring the code. For example, you might simply say:
if (a < 0 || b < 0) {
    throw new IllegalArgumentException("a < 0 || b < 0");
}
return a + b;
Which approach is "correct" depends a lot on the complexity of the
logic, and what style you prefer.
The final example illustrates the case where you'd like to perform
some actions, and then somehow gain control for cleanup processing.
You want to do this whether the actions succeed, fail, or trigger
an exception. This case is sometimes implemented in C/C++ by using
a goto to jump to the end of a function, where there is some
cleanup code.
Here's an example of how you can do this using a Java(tm) program:
public class ControlDemo4 {
    // add two numbers together, a >= 0 and b >= 0
    // throw IllegalArgumentException if a or b out of range
    static int traceadd(int a, int b) {
        try {
            if (a < 0 || b < 0) {
                throw new IllegalArgumentException(
                    "a < 0 || b < 0");
            }
            return a + b;
        }
        finally {
            System.out.println("trace: leaving traceadd");
        }
    }

    public static void main(String args[]) {
        // legal case
        try {
            int a = 37;
            int b = 47;
            int c = traceadd(a, b);
            System.out.println(c);
        }
        catch (IllegalArgumentException e) {
            System.err.println(e);
        }

        // illegal case
        try {
            int a = 37;
            int b = -47;
            int c = traceadd(a, b);
            System.out.println(c);
        }
        catch (IllegalArgumentException e) {
            System.err.println(e);
        }
    }
}
This example does program tracing. It prints a message when the
traceadd method exits. The exit can be normal, through the return
statement, or abnormal, through an exception. Using try...finally
(no catch) like this:
try {
    statement 1
    statement 2
    statement 3
    ...
}
finally {
    cleanup
}
is a way to get control for cleanup, no matter what happens in the
try clause.
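As a small illustration of the pattern, the hypothetical
parsePositive method below records its cleanup in a flag; the
finally clause runs whether the method returns normally or throws
(the class CleanupDemo and its names are ours):

```java
public class CleanupDemo {
    static boolean cleanedUp = false;

    // Parses a positive int, guaranteeing the cleanup code runs
    // whether parsing succeeds or an exception is thrown
    static int parsePositive(String s) {
        try {
            int n = Integer.parseInt(s); // may throw NumberFormatException
            if (n <= 0) {
                throw new IllegalArgumentException(s);
            }
            return n;
        }
        finally {
            cleanedUp = true;   // runs on return and on exception
        }
    }

    public static void main(String args[]) {
        System.out.println(parsePositive("42"));   // prints 42
        System.out.println(cleanedUp);             // prints true
    }
}
```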
For further reading, see chapter 14 in "The Java(tm) Language
Specification" by James Gosling, Bill Joy, and Guy Steele
(http://java.sun.com/docs/books/jls/).
. . . . . . . . . . . . . . . . . . . . . . .
May 30, 2000. This issue celebrates the public release of
the Java(tm) 2 Platform, Standard Edition (J2SE) v1.3
for Windows platforms. (J2SE v1.3 for Solaris and Linux
platforms will be available soon.) Today's tips cover two
features that are new in J2SE v1.3: dynamic proxies and timer
classes. These features are discussed below in:
* Using Dynamic Proxies to Layer New Functionality
Over Existing Code
* Using Timers to Run Recurring or Future Tasks
on a Background Thread
You can view this issue of the Tech Tips on the Web at
http://developer.java.sun.com/developer/TechTips/2000/tt0530.html.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING DYNAMIC PROXIES TO LAYER NEW FUNCTIONALITY OVER EXISTING CODE
Dynamic proxies allow you to implement new interfaces at runtime
by forwarding all calls to an InvocationHandler. This tip shows
you how to use dynamic proxies to add new capabilities without
modifying existing code.
Consider the following program. The program includes an interface
named Explorer. The interface models the movement of an "explorer"
around a Cartesian grid. The explorer can travel in any compass
direction, and can report its current location. The class
ExplorerImpl is a simple implementation of the Explorer interface.
It uses two integer values to track the explorer's progress around
the grid. The TestExplorer class sends the explorer on 100 random
steps, and then logs the explorer's position.
import java.lang.reflect.Method;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface Explorer {
    public int getX();
    public int getY();
    public void goNorth();
    public void goSouth();
    public void goEast();
    public void goWest();
}

class ExplorerImpl implements Explorer {
    private int x;
    private int y;
    public int getX() {return x;}
    public int getY() {return y;}
    public void goNorth() {y++;}
    public void goSouth() {y--;}
    public void goEast() {x++;}
    public void goWest() {x--;}
}

public class TestExplorer {
    public static void test(Explorer e) {
        for (int n = 0; n < 100; n++) {
            switch ((int)(Math.random() * 4)) {
                case 0: e.goNorth(); break;
                case 1: e.goSouth(); break;
                case 2: e.goEast(); break;
                case 3: e.goWest(); break;
            }
        }
        System.out.println("Explorer ended at "
            + e.getX() + "," + e.getY());
    }
    public static void main(String[] args) {
        Explorer e = new ExplorerImpl();
        test(e);
    }
}
Try running the TestExplorer class. You should get one line of
output, similar to this:
Explorer ended at -2,8
Now, imagine that the requirements for the system change, and you
need to log the explorer's movement at each step. Because the
client programmed against an interface, this is straightforward;
you could simply create a LoggingExplorer wrapper class that logs
each method call before delegating to the original Explorer
implementation. This is a nice solution because it does not require
any changes to ExplorerImpl. Here's the new LoggingExplorer wrapper
class:
class LoggingExplorer implements Explorer {
    Explorer realExplorer;
    public LoggingExplorer(Explorer realExplorer) {
        this.realExplorer = realExplorer;
    }
    public int getX() {
        return realExplorer.getX();
    }
    public int getY() {
        return realExplorer.getY();
    }
    public void goNorth() {
        System.out.println("goNorth");
        realExplorer.goNorth();
    }
    public void goSouth() {
        System.out.println("goSouth");
        realExplorer.goSouth();
    }
    public void goEast() {
        System.out.println("goEast");
        realExplorer.goEast();
    }
    public void goWest() {
        System.out.println("goWest");
        realExplorer.goWest();
    }
}
The LoggingExplorer class delegates to an underlying Explorer,
which allows you to add logging to any existing Explorer
implementation. The only change clients of the Explorer interface
need to make is to construct a LoggingExplorer that wraps the real
Explorer. To do this, modify TestExplorer's main method as follows:
public static void main(String[] args) {
    Explorer real = new ExplorerImpl();
    Explorer wrapper = new LoggingExplorer(real);
    test(wrapper);
}
Now your output should be similar to
goWest
goNorth
...several of these...
goWest
goNorth
Explorer ended at 2,2
By delegating to an underlying interface, you added a new layer of
function without changing the ExplorerImpl code. And you did it
with only a trivial change to the test client.
The LoggingExplorer wrapper class is a good start to using
delegation, but this "by-hand" approach has two major drawbacks.
First, it's tedious. Each individual method of the Explorer
interface must be reimplemented in the LoggingExplorer
implementation. The second drawback is that the problem (that is,
logging) is generic, but the solution is not. If you want to log
some other interface, you need to write a separate wrapper class.
The Dynamic Proxy Class API can solve both of these problems.
A dynamic proxy is a special class created at runtime by the
Java(tm) virtual machine. You can request a proxy class that
implements any interface, or even a group of interfaces, by
calling:
Proxy.newProxyInstance(ClassLoader classLoaderToUse,
                       Class[] interfacesToImplement,
                       InvocationHandler objToDelegateTo)
The JVM manufactures a new class that implements the interfaces you
request, forwarding all calls to InvocationHandler's single method:
public Object invoke(Object proxy, Method meth, Object[] args)
    throws Throwable;
All you have to do is implement the invoke method in a class that
implements the InvocationHandler interface. The Proxy class then
forwards all calls to you.
Let's make this work for the Explorer interface. Replace the
LoggingExplorer wrapper class with the following Logger class.
class Logger implements InvocationHandler {
    private Object delegate;
    public Logger(Object o) {
        delegate = o;
    }
    public Object invoke(Object proxy, Method meth, Object[] args)
            throws Throwable {
        System.out.println(meth.getName());
        return meth.invoke(delegate, args);
    }
}
This implementation of the invoke method can log any method call on
any interface. It uses reflective invocation on the Method object
to delegate to the real object.
Now modify the TestExplorer main method as follows to create
a dynamic proxy class:
public static void main(String[] args) {
    Explorer real = new ExplorerImpl();
    Explorer wrapper = (Explorer) Proxy.newProxyInstance(
        Thread.currentThread().getContextClassLoader(),
        new Class[] {Explorer.class},
        new Logger(real));
    test(wrapper);
}
The static method Proxy.newProxyInstance creates a new proxy
that implements the array of interfaces passed as its second
parameter. In this example, the proxy implements the Explorer
interface. All invocations of Explorer methods are then handed off
to the InvocationHandler that is passed as the third parameter.
Try running the updated code. You should see that each step of the
Explorer is logged to System.out.
The dynamic proxy class solves both of the problems of the
"by-hand" approach. There is no tedious copying and pasting of
methods because invoke can handle all methods. Also, the logger
presented here can be used to log calls to any interface in the
Java(tm) language. Try inserting some loggers in your own code to
trace program flow.
Notice that the logging operation is method-generic, that is,
logging does not require any decision making based on the
specifics of the method being called. Dynamic proxies are at their
best when adding method-generic services. Logging is one area
where dynamic proxies can be used to advantage; others include
generic stubs for RMI, automatic parameter validation, transaction
enlistment, authentication and access control, and rule-based
parameter modification and error handling.
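As a sketch of such a method-generic service, the hypothetical
helper below wraps any object in a logging proxy for a given
interface, recording method names in a caller-supplied list before
delegating (the names LogProxy and withLogging are ours, not part
of the tip):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class LogProxy {
    // Wraps target in a proxy implementing iface; each method name
    // is appended to log before the call is delegated to target
    public static Object withLogging(final Object target,
                                     Class iface,
                                     final List log) {
        return Proxy.newProxyInstance(
            Thread.currentThread().getContextClassLoader(),
            new Class[] {iface},
            new InvocationHandler() {
                public Object invoke(Object proxy, Method meth,
                                     Object[] args) throws Throwable {
                    log.add(meth.getName());
                    return meth.invoke(target, args);
                }
            });
    }

    public static void main(String args[]) {
        List log = new ArrayList();
        List real = new ArrayList();
        // any interface works; here we log calls to a List
        List wrapped = (List) withLogging(real, List.class, log);
        wrapped.add("hello");
        System.out.println(log);    // prints [add]
        System.out.println(real);   // prints [hello]
    }
}
```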
Dynamic proxies, like all reflective code, are somewhat slower than
"normal" code. In many situations this performance difference is
not crucial. If you want to evaluate the performance of dynamic
proxies for delegation, download the benchmarking code from
http://staff.develop.com/halloway/JavaTools.html and execute the
TimeMethodInvoke.cmd script. This script measures times for various
styles of method invocation in the Java language.
For more info on dynamic proxies, see
http://java.sun.com/j2se/1.3/docs/guide/reflection/proxy.html.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
USING TIMERS TO RUN RECURRING OR FUTURE TASKS ON A BACKGROUND THREAD
Many applications need to schedule tasks for future execution, or
to schedule them to recur at a regular interval. J2SE v1.3 meets
this need with the addition of two Timer classes: java.util.Timer
and java.util.TimerTask. This tip demonstrates various scheduling
strategies for using these Timer classes. The tip also shows you
how to handle poorly-behaved tasks, that is, tasks that run too
long or that crash.
The java.util.Timer and java.util.TimerTask classes are simple to
use. As with many things threaded, the TimerTask class implements
the Runnable interface. To use the class, simply write a subclass
with a run method that does the work; then plug the subclass into
a Timer instance. Here's an example:
import java.util.*;
import java.io.*;

public class TestTimers {
    public static void doMain() throws Exception {
        Timer t = new Timer(true);
        t.schedule(new Ping("Fixed delay"), 0, 1000);
        Thread.sleep(12000);
    }
    public static void main(String[] args) {
        try {
            doMain();
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }
}

class Ping extends TimerTask {
    private String name;
    public Ping(String name) {
        this.name = name;
    }
    public void run() {
        System.out.println(name + " Ping at " + new Date());
    }
}
The class TestTimers creates a Timer. By passing the Timer the
boolean value true, TestTimers forces the Timer to use a daemon
thread. The main thread then sleeps, allowing you to see the Timer
at work. However you never actually see any thread classes or
instances; those details are encapsulated by the Timer class.
In the statement,
t.schedule(new Ping("Fixed delay"), 0, 1000);
the parameters to the schedule method cause a Ping object's run
method to be invoked after an initial delay of 0 milliseconds;
the method is repeatedly invoked every 1000 milliseconds. Ping's
run method logs output to System.out. (In your own applications,
you would use the run method to do something more interesting.)
If you run TestTimers, you will see output similar to this:
Fixed delay Ping at Thu May 18 14:18:56 EDT 2000
Fixed delay Ping at Thu May 18 14:18:57 EDT 2000
Fixed delay Ping at Thu May 18 14:18:58 EDT 2000
Fixed delay Ping at Thu May 18 14:18:59 EDT 2000
Fixed delay Ping at Thu May 18 14:19:00 EDT 2000
Fixed delay Ping at Thu May 18 14:19:01 EDT 2000
Fixed delay Ping at Thu May 18 14:19:02 EDT 2000
Fixed delay Ping at Thu May 18 14:19:03 EDT 2000
Fixed delay Ping at Thu May 18 14:19:04 EDT 2000
Fixed delay Ping at Thu May 18 14:19:05 EDT 2000
Fixed delay Ping at Thu May 18 14:19:06 EDT 2000
Fixed delay Ping at Thu May 18 14:19:07 EDT 2000
Fixed delay Ping at Thu May 18 14:19:08 EDT 2000
The output confirms that Ping is running about once per second,
just as requested. Better still, the Timer can handle multiple
TimerTasks, each with different start times and repeat periods.
This leads to an interesting question: If a TimerTask takes a very
long time to complete, will other tasks in the list be thrown off?
To answer this question, you need to understand how the Timer uses
threads. Each Timer instance has a single dedicated thread that
all the TimerTasks share. So, if one task takes a long time, all
the other tasks wait for it to complete. Consider this long-running
task:
class PainstakinglySlowTask extends TimerTask {
    public void run() {
        // simulate some very slow activity by sleeping
        try {
            Thread.sleep(6000);
            System.out.println("Painstaking task ran at "
                + new Date());
        }
        catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }
}
The PainstakinglySlowTask class sleeps for six full seconds. It
prevents any other tasks from executing during that time. What
happens if you add the painstakingly slow task to TestTimers?
Let's see.
public static void doMain() throws Exception {
    Timer t = new Timer(true);
    t.schedule(new Ping("Fixed delay"), 0, 1000);
    t.schedule(new PainstakinglySlowTask(), 2000);
    Thread.sleep(12000);
}
If you recompile and run TestTimers, you will see output like this:
Fixed delay Ping at Thu May 18 15:41:33 EDT 2000
Fixed delay Ping at Thu May 18 15:41:34 EDT 2000
Fixed delay Ping at Thu May 18 15:41:35 EDT 2000
Painstaking task ran at Thu May 18 15:41:41 EDT 2000
Fixed delay Ping at Thu May 18 15:41:41 EDT 2000
Fixed delay Ping at Thu May 18 15:41:42 EDT 2000
Fixed delay Ping at Thu May 18 15:41:43 EDT 2000
Fixed delay Ping at Thu May 18 15:41:44 EDT 2000
Fixed delay Ping at Thu May 18 15:41:45 EDT 2000
During the time that PainstakinglySlowTask runs (from 15:41:35
to 15:41:41), no pings occur. This is what is meant by a "fixed
delay". The Timer tries to make the delay between Pings as
precise as possible, even if that means that some Pings are lost
during the running time of another, long-running task.
A scheduling alternative is "fixed rate." With fixed rate
scheduling, the Timer tries to make the processing rate as
accurate as possible over time. So, if one task runs for a long
time, other tasks can instantaneously run several times in order
to catch up. You can specify fixed rate scheduling by using the
scheduleAtFixedRate method:
//commented out the fixed delay version
//t.schedule(new Ping("Fixed delay"), 0, 1000);
t.scheduleAtFixedRate(new Ping("Fixed rate"), 0, 1000);
t.schedule(new PainstakinglySlowTask(), 2000);
If you run TestTimers with a fixed rate ping, you should
see output like this:
Fixed rate Ping at Thu May 18 15:48:33 EDT 2000
Fixed rate Ping at Thu May 18 15:48:34 EDT 2000
Fixed rate Ping at Thu May 18 15:48:35 EDT 2000
Painstaking task ran at Thu May 18 15:48:41 EDT 2000
Fixed rate Ping at Thu May 18 15:48:41 EDT 2000
Fixed rate Ping at Thu May 18 15:48:41 EDT 2000
Fixed rate Ping at Thu May 18 15:48:41 EDT 2000
Fixed rate Ping at Thu May 18 15:48:41 EDT 2000
Fixed rate Ping at Thu May 18 15:48:41 EDT 2000
Fixed rate Ping at Thu May 18 15:48:41 EDT 2000
Fixed rate Ping at Thu May 18 15:48:42 EDT 2000
Fixed rate Ping at Thu May 18 15:48:43 EDT 2000
Fixed rate Ping at Thu May 18 15:48:44 EDT 2000
Fixed rate Ping at Thu May 18 15:48:45 EDT 2000
This time, several Pings run right after PainstakinglySlowTask
finishes; the Pings all run at 15:48:41. This keeps the rate of
Pings as close as possible to the desired 1000 msec average. The
price paid is occasionally having Pings run at approximately the
same time.
Both fixed-rate and fixed-delay scheduling have their uses.
However, neither totally eliminates the interference caused by
long-running tasks. If you have different tasks that might run
for a very long time, you might want to minimize the interference
between the tasks. This is especially true if you need to take
advantage of multiple CPUs. A single Timer provides no obvious
way to do this. You cannot control the Timer thread, because it
is encapsulated as a private field of the Timer class. Instead,
you can create multiple Timers, or have one Timer call notify()
and have other threads do the actual work.
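Here's a minimal sketch of the multiple-Timer approach (the class
TwoTimers and its field names are ours): each Timer owns its own
background thread, so a slow task on one Timer does not delay a
task scheduled on the other.

```java
import java.util.Timer;
import java.util.TimerTask;

public class TwoTimers {
    static volatile String slowThread;
    static volatile String quickThread;

    // Schedules a slow task and a quick task on separate Timers
    // and records which thread each one ran on
    public static void runBoth() throws InterruptedException {
        Timer slow = new Timer(true);
        Timer quick = new Timer(true);
        slow.schedule(new TimerTask() {
            public void run() {
                slowThread = Thread.currentThread().getName();
                try { Thread.sleep(200); }   // simulate a long task
                catch (InterruptedException ie) { }
            }
        }, 0);
        quick.schedule(new TimerTask() {
            public void run() {
                quickThread = Thread.currentThread().getName();
            }
        }, 50);
        Thread.sleep(500);   // give both timers time to fire
        slow.cancel();
        quick.cancel();
    }

    public static void main(String args[]) throws InterruptedException {
        runBoth();
        // the quick task ran while the slow task was still sleeping,
        // because each Timer has its own dedicated thread
        System.out.println(slowThread.equals(quickThread)
            ? "same thread" : "different threads");
        // prints different threads
    }
}
```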
Tasks that throw exceptions pose more of a problem than
long-running tasks. Here's an example. Replace the
PainstakinglySlowTask class with the following CrashingTask class:
class CrashingTask extends TimerTask {
    public void run() {
        throw new Error("CrashingTask");
    }
}

// new version of TestTimers
public static void doMain() throws Exception {
    Timer t = new Timer(true);
    t.scheduleAtFixedRate(new Ping("Fixed rate"), 0, 1000);
    t.schedule(new CrashingTask(), 5000, 1000);
    Thread.sleep(12000);
}
If you run TestTimers with CrashingTask, you should see output
that looks something like this:
Fixed rate Ping at Thu May 18 15:58:53 EDT 2000
Fixed rate Ping at Thu May 18 15:58:54 EDT 2000
Fixed rate Ping at Thu May 18 15:58:55 EDT 2000
Fixed rate Ping at Thu May 18 15:58:56 EDT 2000
Fixed rate Ping at Thu May 18 15:58:57 EDT 2000
Fixed rate Ping at Thu May 18 15:58:58 EDT 2000
java.lang.Error: CrashingTask
at CrashingTask.run(TestTimers.java:37)
at java.util.TimerThread.mainLoop(Timer.java:435)
at java.util.TimerThread.run(Timer.java:385)
After CrashingTask throws an exception, it never runs again. This
should come as no surprise. What may surprise you is that no other
task on the same Timer will run again, either. A wayward
exception will cancel the Timer, causing any future attempt to
schedule a task to throw an exception. However, there is no
mechanism to notify your existing tasks that they have been
brutally de-scheduled. It is up to you to make sure that errant
TimerTasks do not destroy your Timers. One strategy is to guarantee
that your TimerTasks never throw exceptions back into the Timer.
You can do this by enclosing the TimerTasks in a try block that
catches the exception. If you need to be notified of the exception,
you can create a simple mechanism to notify the program that
a failure occurred. Here's an example:
import java.util.*;
import java.io.*;

interface ExceptionListener {
    public void exceptionOccurred(Throwable t);
}

class ExceptionLogger implements ExceptionListener {
    public void exceptionOccurred(Throwable t) {
        System.err.println("Exception on Timer thread!");
        t.printStackTrace();
    }
}

public class TestTimers {
    public static void doMain() throws Exception {
        Timer t = new Timer(true);
        //t.schedule(new Ping("Fixed delay"), 0, 1000);
        t.scheduleAtFixedRate(new Ping("Fixed rate"), 0, 1000);
        t.schedule(new CrashingTask(new ExceptionLogger()),
            5000, 5000);
        //t.schedule(new PainstakinglySlowTask(), 2000);
        Thread.sleep(12000);
    }
    public static void main(String[] args) {
        try {
            doMain();
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }
}

class Ping extends TimerTask {
    private String name;
    public Ping(String name) {
        this.name = name;
    }
    public void run() {
        System.out.println(name + " Ping at " + new Date());
    }
}

class CrashingTask extends TimerTask {
    ExceptionListener el;
    public CrashingTask(ExceptionListener el) {
        this.el = el;
    }
    public void run() {
        try {
            throw new Error("CrashingTask");
        }
        catch (Throwable t) {
            cancel();
            el.exceptionOccurred(t);
        }
    }
}
This code is very similar to the previous version, except that this
time CrashingTask's run method never propagates exceptions of any
type. Instead, it uses a catch block to catch all Throwables and
then uses a callback interface to report the exception. Here's the
output:
Fixed rate Ping at Thu May 18 16:41:03 EDT 2000
Fixed rate Ping at Thu May 18 16:41:04 EDT 2000
Fixed rate Ping at Thu May 18 16:41:05 EDT 2000
Fixed rate Ping at Thu May 18 16:41:06 EDT 2000
Fixed rate Ping at Thu May 18 16:41:07 EDT 2000
Fixed rate Ping at Thu May 18 16:41:08 EDT 2000
Exception on Timer thread!
java.lang.Error: CrashingTask
at CrashingTask.run(TestTimers.java:54)
at java.util.TimerThread.mainLoop(Timer.java:435)
at java.util.TimerThread.run(Timer.java:385)
Fixed rate Ping at Thu May 18 16:41:09 EDT 2000
Fixed rate Ping at Thu May 18 16:41:10 EDT 2000
Fixed rate Ping at Thu May 18 16:41:11 EDT 2000
Fixed rate Ping at Thu May 18 16:41:12 EDT 2000
Fixed rate Ping at Thu May 18 16:41:13 EDT 2000
Fixed rate Ping at Thu May 18 16:41:14 EDT 2000
Fixed rate Ping at Thu May 18 16:41:15 EDT 2000
When CrashingTask throws an exception, it calls cancel on itself
to remove itself from the Timer. It then logs the exception by
calling an implementation of the ExceptionListener interface.
Because the exception never propagates back into the Timer thread,
the Pings continue to function even after CrashingTask fails. In
a production system, a more robust implementation of
ExceptionListener could take action to deal with the exception
instead of simply logging it.
There is another Timer class in the Java Platform,
javax.swing.Timer. Which Timer should you use? The Swing Timer is
designed for a very specific purpose: it runs its tasks on the AWT
event thread. Because much of the Swing package code must execute
on the AWT event thread, you should use the Swing Timer if you are
manipulating the user interface. For other tasks, use
java.util.Timer for its flexible scheduling.
For more info on the Timer classes, see
http://java.sun.com/j2se/1.3/docs/api/java/util/Timer.html.
. . . . . . . . . . . . . . . . . . . . . . .