Most of these suggestions apply only after a bottleneck has been identified:
Test using the target size for collections or, if that is not known, a range of collection sizes.
Test updating collections using the expected order of the data or, if that is not known, various orders of data, including sorted data.
Match the scaling characteristics of the structures against the volumes of data likely to be applied.
ArrayList is probably a better default list than LinkedList.
Presize collections to their final sizes when possible.
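A minimal sketch of presizing (class and method names are illustrative; the 0.75 figure assumes HashMap's default load factor): setting the capacity up front avoids repeated internal array copies or rehashes as the collection grows.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Presize {
    // Capacity set up front: no internal array copies while filling.
    static List<Integer> filledList(int n) {
        List<Integer> list = new ArrayList<>(n);
        for (int i = 0; i < n; i++) list.add(i);
        return list;
    }

    // For a HashMap, account for the default 0.75 load factor so that
    // inserting n entries never triggers a rehash.
    static Map<Integer, Integer> filledMap(int n) {
        Map<Integer, Integer> map = new HashMap<>((int) (n / 0.75f) + 1);
        for (int i = 0; i < n; i++) map.put(i, i * 2);
        return map;
    }
}
```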
Test for the RandomAccess marker interface where applicable, and choose the iteration style accordingly for faster list iteration.
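One way this test is typically written (a sketch; the class name is illustrative): indexed access is fastest for RandomAccess lists such as ArrayList, while sequential lists such as LinkedList should fall back to an iterator.

```java
import java.util.List;
import java.util.RandomAccess;

class FastIterate {
    // Indexed loop for RandomAccess lists; iterator for sequential lists,
    // where get(i) would be O(i) per call.
    static long sum(List<Integer> list) {
        long total = 0;
        if (list instanceof RandomAccess) {
            for (int i = 0, n = list.size(); i < n; i++) total += list.get(i);
        } else {
            for (int v : list) total += v;
        }
        return total;
    }
}
```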
Consider switching to alternative data structures or algorithms.
Use the most appropriate collection class available.
Consider using two collections with different performance characteristics to hold the same data.
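For example, the same entries might be held both in a map (constant-time lookup by key) and in a list (fast ordered iteration), trading extra memory and a double update for faster reads under both access patterns. A sketch, with an illustrative class name:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class DualIndex<K, V> {
    private final Map<K, V> byKey = new HashMap<>();
    private final List<V> inOrder = new ArrayList<>();

    void add(K key, V value) {
        byKey.put(key, value);   // both structures must be updated together
        inOrder.add(value);
    }

    V get(K key)      { return byKey.get(key); }
    List<V> ordered() { return inOrder; }
}
```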
Consider using plain arrays, e.g., int[], Object[].
Consider using hybrid data structures.
Use specialized collections that avoid casts or unnecessary method calls.
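A minimal sketch of such a specialized collection (the class name is illustrative): an int list that stores primitives directly, avoiding the boxing, unboxing, and casts a List of Integer would incur.

```java
class IntList {
    private int[] data = new int[16];
    private int size;

    void add(int value) {
        if (size == data.length) {                  // grow by doubling
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value;
    }

    int get(int index) { return data[index]; }
    int size()         { return size; }
}
```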
Consider wrapping the elements of the collection in a specialized class that improves access times (e.g., a Hashtable key class with faster hashCode() and equals() methods).
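A sketch of such a key wrapper (the class name is illustrative): the hash is computed once in the constructor, so repeated map probes never rescan the underlying array, and equals() uses the cached hash as a cheap early exit.

```java
import java.util.Arrays;

final class CachedKey {
    private final byte[] bytes;
    private final int hash;   // computed once, reused on every probe

    CachedKey(byte[] bytes) {
        this.bytes = bytes.clone();
        this.hash = Arrays.hashCode(this.bytes);
    }

    @Override public int hashCode() { return hash; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CachedKey)) return false;
        CachedKey other = (CachedKey) o;
        // Cheap hash comparison first; full array scan only on a match.
        return hash == other.hash && Arrays.equals(bytes, other.bytes);
    }
}
```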
Add caching or specialized accessors to collections where some elements are accessed more often than others.
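A simple form of this (a sketch; the class name is illustrative, single-threaded use and non-null keys assumed) caches the most recently fetched entry so repeated reads of a hot key skip the map probe entirely.

```java
import java.util.HashMap;
import java.util.Map;

class LastAccessCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private K lastKey;
    private V lastValue;

    void put(K key, V value) {
        map.put(key, value);
        if (key.equals(lastKey)) lastValue = value;  // keep cache consistent
    }

    V get(K key) {
        if (key.equals(lastKey)) return lastValue;   // hot path: no map probe
        V value = map.get(key);
        lastKey = key;
        lastValue = value;
        return value;
    }
}
```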
Access the underlying collection structure when iterating over the elements of a collection.
Copy the elements into an array rather than access many elements one at a time through the collection element accessors.
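For instance (a sketch; the class name is illustrative): one bulk toArray() call replaces many per-element accessor calls, which helps when each accessor does extra work such as synchronization or range checks.

```java
import java.util.List;

class ArrayCopyIterate {
    // Copy once, then iterate the plain array rather than calling the
    // collection's accessor for every element.
    static long sum(List<Integer> list) {
        Object[] snapshot = list.toArray();   // single bulk copy
        long total = 0;
        for (Object o : snapshot) total += (Integer) o;
        return total;
    }
}
```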
Preallocate memory for element storage rather than allocating at update time.
Reuse element-storage objects when they are released.
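A sketch of such reuse via a simple object pool (class names are illustrative; not thread-safe): released node objects are kept and handed back out, so steady-state updates allocate nothing.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class NodePool {
    static class Node {
        Object value;
        Node next;
    }

    private final Deque<Node> free = new ArrayDeque<>();

    Node acquire() {
        Node n = free.poll();            // reuse a released node if available
        return (n != null) ? n : new Node();
    }

    void release(Node n) {
        n.value = null;                  // clear fields so pooled objects
        n.next = null;                   // do not pin garbage
        free.push(n);
    }
}
```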
Map elements into one or more arrays of data variables to allocate the whole collection in as few calls as possible.
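A sketch of this parallel-arrays layout (the class name is illustrative): instead of allocating N small point objects, two array allocations hold the whole collection's data.

```java
class Points {
    private final double[] xs;
    private final double[] ys;

    Points(int capacity) {      // two allocations for the whole collection
        xs = new double[capacity];
        ys = new double[capacity];
    }

    void set(int i, double x, double y) { xs[i] = x; ys[i] = y; }
    double x(int i) { return xs[i]; }
    double y(int i) { return ys[i]; }
}
```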
Test if changing a recursive algorithm to an iterative one provides a useful speedup.
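The classic illustration is Fibonacci (a sketch; method names are illustrative): the naive recursive version makes an exponential number of calls, while the iterative replacement runs in linear time with no call overhead.

```java
class Fib {
    // Naive recursion: exponential number of method calls.
    static long fibRecursive(int n) {
        return (n < 2) ? n : fibRecursive(n - 1) + fibRecursive(n - 2);
    }

    // Iterative replacement: linear time, constant space.
    static long fibIterative(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }
}
```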
Make recursive methods tail-recursive.
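A sketch (method names are illustrative): in the tail-recursive form the recursive call is the method's last action, with the running result carried in an accumulator. Since the HotSpot JVM does not eliminate tail calls, the practical payoff is that a tail call rewrites mechanically into a loop.

```java
class Factorial {
    // Tail-recursive form: the recursive call is the last action;
    // the running result lives in the accumulator.
    static long factTail(long n, long acc) {
        return (n <= 1) ? acc : factTail(n - 1, acc * n);
    }

    // Mechanical rewrite of the tail call into a loop.
    static long factLoop(long n) {
        long acc = 1;
        while (n > 1) {
            acc *= n;
            n--;
        }
        return acc;
    }
}
```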