==== Template Creation and Data Preparation ====
Suppose the initial experiment data are stored in an HDF5 file, say ''data.h5''. Within it lies a single 3-dimensional dataset, ''patterns''.
<code python>
In [1]: import h5py
In [2]: f = h5py.File('data.h5', 'r')
In [3]: list(f.items())
Line 15: Line 15:
  
Suppose about 200 single-hit patterns are selected and stored in an HDF5 dataset ''clean_single_hits''. The next step is to generate more templates of single-hit patterns to compare against the rest of the dataset. This can be achieved with some basic Python routines (note again that the code here is not optimized, and one can improve it for better performance):
<code python>
import h5py
import numpy as np
</code>
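As a hedged sketch of one way such template generation can work (the function and array names here are illustrative, not the page's actual code): diffraction patterns are often usable under rotation and mirroring, so each clean single-hit pattern can be expanded into its eight dihedral variants.

<code python>
import numpy as np

def augment_templates(hits):
    """Expand a stack of 2-D patterns [n, h, w] into 8x as many
    templates via dihedral symmetry (4 rotations x optional flip)."""
    templates = []
    for p in hits:
        for k in range(4):                  # 0, 90, 180, 270 degree rotations
            r = np.rot90(p, k)
            templates.append(r)             # rotated copy
            templates.append(np.fliplr(r))  # mirrored copy
    return np.stack(templates)

# toy stand-in for the clean_single_hits dataset: 3 random 8x8 "patterns"
hits = np.random.rand(3, 8, 8)
templates = augment_templates(hits)
print(templates.shape)  # (24, 8, 8)
</code>

Whether all eight variants are physically valid depends on the detector geometry, so this is only a plausible starting point.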
not exactly "deep", but it should be enough for a relatively small dataset like this.
To build the network I also used Keras, a higher-level API built upon both Theano and TensorFlow, for cleaner code structure. **TensorFlow is the preferred backend since it has better multi-GPU support, but on an operating system as old as CentOS 6 it is almost impossible to get it installed.**
<code python>
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
</code>
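For intuition, the core operation each convolutional layer in such a network performs is a 2-D cross-correlation: a small kernel slides over the image and computes a weighted sum at every position. A minimal NumPy sketch of that operation (illustrative only, not Keras internals):

<code python>
import numpy as np

def conv2d_valid(img, kernel):
    """'valid' 2-D cross-correlation: output shrinks by kernel_size - 1."""
    kh, kw = kernel.shape
    oh = img.shape[0] - kh + 1
    ow = img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # weighted sum of the patch under the kernel
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0]])  # horizontal gradient kernel
out = conv2d_valid(img, edge)
print(out.shape)  # (4, 3)
print(out[0])     # [-1. -1. -1.]
</code>

A Conv2D layer learns many such kernels at once, and the pooling layers then downsample the resulting feature maps.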
  
My approach uses images stored in different directories:
<code python>
# import data
train_generator = train_datagen.flow_from_directory(
</code>
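''flow_from_directory'' infers class labels from subdirectory names, so the images need a layout along these lines (the directory and class names here are illustrative, not the actual paths used):

<code>
train/
  single_hit/
    img_0001.png
    ...
  non_single_hit/
    img_0101.png
    ...
validation/
  single_hit/
  non_single_hit/
</code>

Each subdirectory becomes one class, with labels assigned in alphabetical order of the subdirectory names.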
Then, we can train our model and save the trained weights:
<code python>
model.fit_generator(
    train_generator,
</code>
  
You can now use the model and saved weights to make predictions on new experiment data, and save the prediction labels:
<code python>
# suppose y is a numpy array with dimension [number_of_samples, color_channels, height, width]
# if using the TensorFlow backend, it should be [number_of_samples, height, width, color_channels]
</code>
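As a sketch of the post-processing step (the ''probs'' array here is a toy stand-in for the output of ''model.predict'', not the page's actual data): the predicted label for each pattern is simply the index of its largest class probability.

<code python>
import numpy as np

# toy stand-in for model.predict(y): 4 samples, 2 classes
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4],
                  [0.3, 0.7]])

labels = probs.argmax(axis=1)  # index of the most probable class per sample
print(labels)  # [0 1 0 1]

# the labels array could then be written out, e.g. with h5py or np.save
</code>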

Richard Feynman said:

“Everything that is living can be understood in terms of the jiggling and wiggling of atoms.”

And now, we want to watch atoms jiggling and wiggling.

Advances in photon science, with X-rays, electrons, and fluorescence light, together with computational modeling, are making this happen.