The motivation for automated mapping

I had a meeting with Pine to discuss the motivation of my proposal, which led me to rethink the true motivation of what I am doing and why it is a contribution.

For the general framework of using a data-driven approach to infer metadata from sensor measurements, I need to show: 1) why this is necessary, i.e., why an automated approach is better than manual recording, and 2) why we should use a data-driven approach instead of other approaches (such as text mining).

The first question should be addressed in two ways. The automated approach is better in terms of efficiency. Statistics should support that manual recording is time-consuming and error-prone, while the data-driven approach is fast even when including the time needed to collect data. It is also better in terms of usability. The data-driven approach tries to map the data into a common namespace (e.g., Haystack), while tags are often named in inconsistent ways and are hard to interpret and understand. Although the initial tag naming could be based on a standard scheme, the knowledge required of installers complicates the installation process (also needs proof?). For existing buildings whose tags are already labelled, it is also expensive to remap the tags to a common namespace. (Some examples from me to let people know the time and effort required to do the conversion.)

The lack of usability in current tags impedes many applications built on BAS to facilitate energy-efficiency goals in buildings. (Examples with statistics?) I intuitively think the different naming strategies across buildings, or even inside a single building, largely hinder the generalizability of applications. For example, a certain application needs to monitor the temperature in different zones of the building, yet the tags are labelled as "T", "tp", "temp", "Temperature", and it is time-consuming for people to consider all acronyms for a given measurement. Some examples can be shown from BAS tags used in CMU campus buildings. (Lack of a standard labelling code.)

'DOH.AHU.001.CCO'  'DOH.AHU.001.CCV'  'DOH.AHU.001.DBS'
'pipe:sine2t'  'pitot5'  'RM836AEVP.PRESENT_VALUE'
'MI.AHU.3FL.011.CCO'  'MI.AHU.3FL.011.CCT'  'MI.AHU.3FL.011.CWR'

As we can see, the problem is that there is no standard code to regularize the naming scheme. Despite efforts such as Haystack, IFC, and the Semantic Sensor Web, no one-size-fits-all solution exists and people are still not using them (why?). This naturally leads to the second question: how can we get usable information without such a standard naming code for tags? Two common approaches are inference from tags (IFT) and inference from data (IFD). Why should we use the data-driven approach (IFD), or why should we use a combination of both? Even for IFT/IFD: what exactly are we inferring, and do we have a good namespace for the information we are inferring? Why can we not use this namespace during the initial installation / manual recording? Besides, is the information inferred from data more complete and structured than that from tags? By looking at the history of the data, we may be able to detect changes in behavior. (How to substantiate this?)

Let's go through some papers discussing the necessity of an automated approach.

  • W. Jones et al.: Critical Information for First Responders, Whenever and Wherever it is Needed, 2001
    The type of information in buildings is essential to improve the effectiveness of fire-fighting operations and the safety of the crews. One of the challenges is interpreting sensor signals to know what environment is being detected.
  • L. Luskay et al.: Methods for Automated and Continuous Commissioning of Building Systems, 2003
    The lack of commissioning of buildings leads to lower levels of equipment availability and greater occupant dissatisfaction. Methods for automated commissioning in building systems should be investigated.
  • M. Brambley et al.: Advanced Sensors and Controls for Building Applications: Market Assessment and Potential R&D Pathways, 2005
    A major barrier impeding BAS energy savings is the hardware problem of input devices, sensors, transducers, and wiring.
  • N. Dawes et al.: Sensor Metadata Management and Its Application in Collaborative Environmental Research, 2008
    Metadata management in the e-science domain.
  • X. Liu, B. Akinci: Requirements and Evaluation of Standards for Integration of Sensor Data with Building Information Models, 2009
    The need to understand the context of sensors to analyze the condition of facilities. Sensor readings alone don't support rich analysis; facility managers often need the topological structure of sensors, location information, function, etc. to perform maintenance tasks.
  • J. Butler: Point Naming Standards, 2010
    Point name standards cover: 1) building, 2) category, 3) equipment type, 4) space type, 5) point type. Well-chosen point names can provide useful information about installed systems to the people responsible for maintaining, modifying, and interconnecting various building systems. Software that performs automated analysis of HVAC system performance may also benefit from the consistent application of a point naming standard.
  • J. Lu, K. Whitehouse: Smart Blueprints: Automatically Generated Maps of Homes and the Devices Within Them, 2012
    Generates a map of a home using light and motion sensors, avoiding the complicated configuration process. The occupant can just buy off-the-shelf sensors and let the system auto-configure itself.
  • Jean-Paul Calbimonte et al.: Deriving Semantic Sensor Metadata from Raw Measurements, 2012
    The metadata for sensor types is not always complete and coherent; for example, to indicate a temperature measurement, different sensors use various tags like 'T', 'tp', 'temp', 'temperature', 'mstemperature', etc.

Collections of good posts

Concepts

L0 norm, L1 norm, L∞ norm
A very good introduction to HMM
Intro to Reproducing Kernel Hilbert Spaces
Kalman and Bayesian Filters in Python

Cheat Sheet

Matrix Cookbook
Machine Learning Cheat Sheet
Linear Algebra Explained in Four pages
Git
Vim

Statistics

Statistical Hypothesis Testing
p-value

Useful Sources for Data Science

Data Science Tutorials
Open Source Data Science Masters
Foundations of Data Science
What data scientists should know (Quora)
Data structures and algorithms

Blogs

The little Genius: Christopher Olah
Deep Learning Net Blogs
Larry Wasserman

Miscellaneous

Google Coding Style Guides

sparse autoencoder

Andrew Ng’s tutorial:

http://nlp.stanford.edu/~socherr/sparseAutoencoder_2011new.pdf

Convolution and pooling to extract features:

http://ufldl.stanford.edu/wiki/index.php/Feature_extraction_using_convolution

http://ufldl.stanford.edu/wiki/index.php/Pooling

One paper; I am trying to do similar things, but for power data instead of images:

http://ai.stanford.edu/~ang/papers/nipsdlufl10-AnalysisSingleLayerUnsupervisedFeatureLearning.pdf

Some interesting pictures from Tracebase

I would like to show the accuracy of predicting the appliance type with different algorithms. All results are based on 5-fold CV, which means 80% training data and 20% test data in each fold. To test different classifiers, I wrote a function covering seven of the most common classifiers, with adjustable parameters: Gaussian naive Bayes, k-nearest neighbours, logistic regression, discriminant analysis, support vector machine, decision tree, and neural network.
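
As a rough illustration of that comparison in code (the original function is in Matlab; this is a minimal scikit-learn sketch with synthetic data and illustrative parameters, not the actual settings):

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the real features/labels (14 features, as in this post).
X, y = make_classification(n_samples=300, n_features=14, n_informative=8,
                           n_classes=3, random_state=0)

classifiers = {
    "Gaussian naive Bayes": GaussianNB(),
    "k-nearest neighbours": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(max_iter=1000),
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "support vector machine": SVC(kernel="rbf"),
    "decision tree": DecisionTreeClassifier(),
    "neural network": MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000),
}

# 5-fold CV: each fold trains on 80% of the data and tests on the other 20%.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")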

The first picture is generated by following the authors' idea: I extracted the 14 significant features they suggested. I ran the features and labels through Weka with Random Committee and got an accuracy of about 90%. Running more algorithms, we found one interesting thing: all of the algorithms that give high accuracy are tree-based, entropy-related methods. For consistency of analysis (since I process all the data in Matlab), I ran the classification algorithms in Matlab with the 7 classifiers.

As we expected, the decision tree provides reasonable accuracy (around 80%) while the others do much worse; kNN and discriminant analysis give an accuracy of around 50%.

The next three pics show the classification results based on the features generated by autoencoders. The big idea is as follows. We have the initial input data (1836×86400), representing 86400 points per day for each appliance (1836 appliances; some appliances were measured for multiple days). We want to learn representative features from the data for each appliance, so we pick patches randomly from the 86400 initial points across the 1836 samples, giving an m×n patch matrix (m is the number of random draws, n is the length of each segment drawn from the 86400 points). Treating the patches as m samples of n features, we set the number of hidden units to 100 (compared to n) and run the sparse autoencoder, which gives us the weight matrix. Using the weight matrix to do the convolving and pooling step, we get the new features from the autoencoders. We then run the classifiers again to get the pics below. With different parameters in the autoencoder procedure the shapes differ a little, but they all look pretty similar.
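
To make the pipeline concrete, here is a minimal numpy sketch of the patch-extraction and convolve/pool steps; the sparse-autoencoder training that would actually produce the weight matrix W is omitted, and the random W, the reduced number of traces, and the stride/pooling sizes are stand-ins just to keep the sketch runnable:

import numpy as np

rng = np.random.default_rng(0)
data = rng.random((50, 86400))        # stand-in traces (full set: 1836 x 86400)

patch_len, n_patches = 600, 10000     # n and m in the text above
rows = rng.integers(0, data.shape[0], n_patches)
starts = rng.integers(0, data.shape[1] - patch_len, n_patches)
# patches would be the training set for the sparse autoencoder
patches = np.stack([data[r, s:s + patch_len] for r, s in zip(rows, starts)])

# A trained sparse autoencoder would give W (hidden_units x patch_len);
# random weights here just keep the sketch self-contained.
hidden_units = 100
W = rng.standard_normal((hidden_units, patch_len))

def convolve_pool(trace, W, stride, pool_regions=10):
    # Slide each learned filter over the trace, then mean-pool the activations.
    windows = np.stack([trace[i:i + W.shape[1]]
                        for i in range(0, len(trace) - W.shape[1] + 1, stride)])
    activations = 1.0 / (1.0 + np.exp(-windows @ W.T))   # sigmoid hidden units
    pooled = np.array_split(activations, pool_regions)
    return np.concatenate([p.mean(axis=0) for p in pooled])

features = np.stack([convolve_pool(t, W, stride=8640) for t in data[:5]])
print(features.shape)                 # (5, pool_regions * hidden_units)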

This one is 600×600×100 (feature size × samples × hidden units), with convolved size 10.

This one is 600×60000×100 (feature size × samples × hidden units), with convolved size 10.

This one is 600×60000×100 (feature size × samples × hidden units), with convolved size 1.

It's expected that I get a very low accuracy, since many details haven't been handled carefully: how to choose the feature size, number of samples, and hidden units? How to decide the sizes during convolving and pooling? And so on. I may try different combinations to see the results, but it takes a long time as those numbers increase. Let's discuss then.

Brief Updates about Tracebase

Based on the reply from the author of that paper, I was trying to extract 15 features from the raw data. Some of them are easy to extract, like the average power level for a complete day, or the energy consumption between 9:00pm and 9:10pm. However, most of them are not so easy to extract, like the ones computed from certain "activity phases".
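
For the easy ones, a small sketch, assuming a one-reading-per-second trace of watt values covering a full day (matching the 86400-points-per-day format used elsewhere):

import numpy as np

def easy_features(trace_w):
    # trace_w: 86400 one-per-second watt readings, midnight to midnight
    avg_power_w = trace_w.mean()              # average power level, whole day
    s, e = 21 * 3600, 21 * 3600 + 600         # 9:00pm .. 9:10pm
    energy_wh = trace_w[s:e].sum() / 3600.0   # watt-seconds -> watt-hours
    return avg_power_w, energy_wh

trace = np.random.default_rng(0).random(86400) * 100   # toy trace
print(easy_features(trace))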

Activity phases, as the authors suggest, are segments of a trace. A visual representation can be seen below (from this paper):

However, this turns out to be a non-trivial data-processing step. Interpolation is fine, but the preprocessing that removes very short power-consumption bursts and eliminates outliers will be a little painful. The next segmentation step is even harder to realize. Of course it looks easy for the dataset in the figure, but if you click the link below and look at what the data really looks like, you will find it's not that simple; too many parameters need to be considered for the preprocessing and segmentation. A rough sketch of one possible approach follows the data link below.

What does data look like
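
For reference, one rough sketch of what the burst-removal and segmentation could look like; the thresholds here are my guesses, not values from the paper:

import numpy as np

def segment_activity_phases(trace, on_threshold=5.0,
                            min_phase_len=30, max_gap=60):
    # Return (start, end) index pairs of activity phases in a 1 Hz trace.
    on = trace > on_threshold                      # crude on/off mask
    padded = np.concatenate(([False], on, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    starts, ends = edges[::2], edges[1::2]
    # Merge phases separated by short off-gaps, then drop short bursts.
    phases = []
    for s, e in zip(starts, ends):
        s, e = int(s), int(e)
        if phases and s - phases[-1][1] <= max_gap:
            phases[-1] = (phases[-1][0], e)
        else:
            phases.append((s, e))
    return [(s, e) for s, e in phases if e - s >= min_phase_len]

# Toy trace: a 2-minute phase plus a 5-second burst that gets dropped.
trace = np.zeros(600)
trace[100:220] = 60.0
trace[400:405] = 60.0
print(segment_activity_phases(trace))              # [(100, 220)]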

Let's discuss tomorrow to see if we want to devote time to preprocessing it. (Or ask Suman about the person who did the "dirty" work for the Tracebase paper.)

Also, regarding the other dataset we collected, I have basically implemented the function we discussed this morning, though I need to discuss several details with you, including:

1. How to handle the noise in the voltage? So far I am providing the raw data without filtering. Suman suggested we either discard the traces with noise or filter them and make sure the phase stays the same. I would prefer to filter the noise, but you seem to think it might contain some information.

2. The extracted transient does not look great. Currently I use the maximum of the first derivative to find the transient and the maximum power value to center it (see the sketch after this list). Are there any other options?

3. I would like to try a bunch of supervised methods on the data itself to see the results. There is a Python package called scikit-learn which looks pretty cool; see below for a cheat sheet it provides. What features do you recommend I try first?
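
Regarding point 2, here is a minimal sketch of the current heuristic (maximum of the first derivative to locate the transient, maximum power to center the window); the window length is an illustrative choice:

import numpy as np

def extract_transient(power, half_window=200):
    onset = np.argmax(np.diff(power))        # steepest rise ~ switch-on event
    peak = onset + np.argmax(power[onset:])  # center on the maximum power
    lo = max(0, peak - half_window)
    hi = min(len(power), peak + half_window)
    return power[lo:hi]

power = np.zeros(2000)
power[500:1500] = 100.0
power[500:520] += np.linspace(50, 0, 20)     # switch-on overshoot
print(len(extract_transient(power)))         # up to 2 * half_window samples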

Brief guideline to run the Firefly plug meter/gateway

Please do use nano-RK version 1669!

Other versions may not work.

The website : http://nanork.org/projects/nanork/wiki/Quick-Start

For the gateway (the one with Arduino as the bootloader; port: /dev/ttyUSB0):

1. Set up fuses (can basically be ignored)

This is only required for a new device whose fuses have never been set.

cd nano-RK/tools/fuse-conf/firefly3_x/
sudo ./ff-set-fuses-bootloader

(There are six command-line tools that can be used to set fuses. In practice this step is not necessary for gateways, since all the gateways have already had their fuses set and the bootloader installed. A hardware programmer is only needed for a device that does not yet have a bootloader; at that stage, the programmer is used to set the fuses and to install the bootloader.)

2. Flash the gateway

cd nano-RK/projects/demos/pcf-demo/pcf-host/
make clean
sudo make program

(sudo avrdude -b 57600 -F -p atmega128 -P /dev/ttyUSB0 -c arduino -V -U flash:w:main.hex

The command above is the essential part: we use the command-line tool avrdude to flash the program main.hex into the chip through the Arduino bootloader, which acts as the "no-programmer". Strictly speaking, the chip option (-p) should be atmega128rfa1; atmega128 also works, which is why people still use the old value, but I recommend the new one.)

3. Set up EEPROM, including mac addresses and channel

cd nano-RK/tools/EEPROM_mac_set
sudo ./config-eeprom /dev/ttyUSB0 00000000 25 -e 00112233445566778899AABBCCDDEEFF -v FF3

(00000000 is the MAC address of the gateway [it must be 00000000]; 25 is the channel [it must match the plug meter's channel]; -e provides the AES key and -v the Firefly version. -e and -v are optional.)

For the plug meter (the one with an AVRISP mkII as the programmer; port: usb), connect the plug meter to the programmer first, plug it into an electrical outlet, then:

1. Set up fuses

This is only required for a new device whose fuses have never been set.

cd nano-RK/projects/demos/pcf-demo/ff-clients/plug-client/fuse-conf/
sudo ./plug-meter-fuses-mkII

2. Flash the plug meter

cd nano-RK/projects/demos/pcf-demo/ff-clients/plug-client/
make clean
make
chmod +x prog-cmd/prog_mk2.sh
sudo prog-cmd/prog_mk2.sh

Essentially, we are running

avrdude -b 115200 -F -p atmega128 -P usb -c avrispmkII -V -U flash:w:main.hex

3.  Set up EEPROM, including mac addresses and channel

cd nano-RK/tools/EEPROM_mac_set
sudo ./config-eeprom usb 00000001 25 -v MK2

(The MAC address must be different from the gateway's, chosen from 1-63, and unique on the network (according to Patrick); the channel must be the same as the gateway's. Remember to set the right Firefly/programmer version.)

Running the devices to collect power data:

For the gateway, run the server:

cd nano-RK/tools/SLIPstream/SLIPstream-server/
make clean
make
sudo ./SLIPstream /dev/ttyUSB0 5000

For the client plug meter, to read the data:

cd nano-RK/projects/demos/pcf-demo/slip-clients/read-data/
make clean
make
sudo ./read-data localhost 5000

Then you should see something like this in the terminal:

P,1391208546,1,37,0,60,0,0,0,0,0,0,0,169,882,494,526,189,780,526,509,495,1

P,1391208547,1,36,1,60,255,8,1985,0,167,39533,10995116277760,169,882,493,526,189,786,526,509,495,1

P,1391208550,1,37,3,60,255,8,1984,1099511627776,168,39545,35184372088832,169,881,492,527,180,789,526,509,495,1

P,1391208551,1,38,4,60,255,8,1966,2199023255552,165,38976,47278999994368,170,881,495,515,343,783,526,509,495,1

(Note: not "cd nano-RK/tools/SLIPstream/SLIPstream-client && sudo ./sample-client localhost 5000". Remember the port must be the same as the gateway server's. To store the data and run in the background, simply: sudo ./read-data localhost 5000 > data.txt &)
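
For later processing, a hedged parsing sketch: only the leading "P" marker and the second field (which looks like a Unix timestamp) are inferred from the sample lines above; treating the remaining fields as generic integers is my assumption.

import sys, time

# Usage: sudo ./read-data localhost 5000 | python parse_readdata.py
for line in sys.stdin:
    parts = line.strip().split(",")
    if not parts or parts[0] != "P":
        continue
    ts = int(parts[1])                       # looks like a Unix timestamp
    values = [int(v) for v in parts[2:]]     # remaining fields: undocumented here
    print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(ts)), values)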

To control the plug:

cd nano-RK/projects/demos/pcf-demo/slip-clients/plug-ctrl/
make clean
make
./plug-ctrl
./plug-ctrl localhost 5000 0x14 0

Concepts related to hypothesis testing

I have been dealing with the concepts of FP/FN, precision, recall, etc. recently. Even though I know how to calculate them, I am still curious why we calculate them that way. Through reading the wiki and thinking it over, I realized that the definition of FP/FN in classification problems differs from how we think about FP/FN in the general case, where FP/FN is defined with respect to a null hypothesis. I will explain it starting from hypothesis testing.

A null hypothesis is a proposition that undergoes verification to determine if it should be accepted or rejected in favor of an alternative. Often it is expressed as the claim that there is no relationship between two measured phenomena. More generally, a proposition tends to be treated as a null hypothesis if we are inclined to accept it (rejecting it only with strong evidence), and as an alternative hypothesis if we are inclined to reject it.

In this case, our null hypothesis is that the pipe is intact.

FP: (the doctor diagnoses a man as pregnant.)

Type I error, false positive: the incorrect rejection of a true null hypothesis.

For example, you predict the pipe is damaged while the pipe is actually intact (the null hypothesis "the pipe is intact" is true, and you reject it by claiming damage). This is an FP error.

FN: (the doctor diagnoses a pregnant woman as not pregnant.)

Type II error, false negative: the failure to reject a false null hypothesis.

For example, you predict the pipe is intact while the pipe is actually damaged (the null hypothesis "the pipe is intact" is false, but you fail to reject it and believe it is true).

TP:

True positive: the correct rejection of a false null hypothesis.

For example, you predict the pipe is damaged and the pipe is actually damaged.

TN:

True negative: the correct acceptance (non-rejection) of a true null hypothesis.

For example, you predict the pipe is intact and the pipe is actually intact.

Another example, in which the null hypothesis is that people are healthy. (In this case, we want to test the effectiveness of certain equipment: we would like to see whether the equipment can diagnose (or predict) that people are sick, so a positive result means diagnosing someone as sick.)

  • True positive: Sick people correctly diagnosed as sick
  • False positive: Healthy people incorrectly identified as sick
  • True negative: Healthy people correctly identified as healthy
  • False negative: Sick people incorrectly identified as healthy.

In general, positive = identified and negative = rejected. Therefore:

  • True positive = correctly identified/predicted
  • False positive = incorrectly identified/predicted
  • True negative = correctly rejected
  • False negative = incorrectly rejected

Next, precision, recall, true negative rate, and accuracy are defined based on TP, TN, FP, and FN:

\text{Precision} = \frac{TP}{TP+FP}
\text{Recall} = \frac{TP}{TP+FN}
\text{True Negative Rate} = \frac{TN}{TN+FP}
\text{Accuracy} = \frac{TP+TN}{TP+TN+FP+FN}

Precision (positive predictive value) is the fraction of retrieved instances that are relevant, while recall (sensitivity) is the fraction of relevant instances that are retrieved.

For example, a program for recognizing dogs in scenes from a video identifies 7 dogs in a scene containing 9 dogs and some cats. If 4 of the identifications are correct but 3 are actually cats, the program's precision is 4/7 while its recall is 4/9. When a search engine returns 30 pages, only 20 of which are relevant, while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3.
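
As a quick sanity check of the dog example in code:

def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

tp, fp = 4, 3            # 4 correct detections, 3 cats mislabeled as dogs
fn = 9 - tp              # 5 of the 9 dogs were missed
print(precision_recall(tp, fp, fn))   # (0.571..., 0.444...) = (4/7, 4/9)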

Absence of type I and type II errors corresponds to maximum precision (no FP) and maximum recall (no FN). Precision can be seen as a measure of exactness or quality, whereas recall is a measure of completeness or quantity.

High precision means that an algorithm returned more relevant results than irrelevant, while high recall means that an algorithm returned most of the relevant results.

So much similar stuff using IoT

Just making a list; I will update it if I find more things.

Commercial Product

Lockitron: use your phone to control the door remotely.

Ninja Blocks: allows anybody to monitor and control their home remotely over the Internet.

Nest: smart thermostat.

Tado: similar to Nest.

Netatmo: personal weather station; monitor your weather and air quality.

Lapka: monitors humidity, temperature, radiation, EMF and organic.

Platform for IoT:

paraimpu: allows people to connect, use, share and compose Things, Services and Devices to create personalized applications in the Web of Things.

Lelylan: maps every device in the home to a unique URL which provides control over it.

evrythng: a software engine that makes physical things smart by connecting them to the web.

Smarter Planet from IBM: building smart systems across our planet.

Some basic information about IoT

Definition:

IoT refers to uniquely identifiable objects (things) and their virtual representations in Internet-like structure.

Technologies Required:

RFID tags for tracking objects, low-power sensors, low-power actuators.

Addressability of Things:

How to identify objects?

Make all things addressable by existing naming protocols, such as URI.

IPv6 will help the system to identify any kind of object.

Frameworks:

Frameworks from Pachube, focusing on real-time data-logging solutions, offer some basis to work with many "things" and have them interact.

Future developments might lead to specific Software development environments to create the software to work with the hardware used in the Internet of Things.

Applications:

Alcatel-Lucent touchatag and Violet's Mirror gadget: link real-world items to the online world using RFID tags and QR codes.

Pachube (Connected Environment Ltd): provides data management infrastructure for sensors, devices and environments.

Nimbits: provides connectivity between devices using data points.

Paraimpu: allows people to connect, use, share and compose things, services and devices to create personalized applications in the field of IoT.

Ninja Blocks: allows anybody to monitor and control their home remotely over the Internet.

Lelylan: maps every device in the home to a unique URL which provides control over it. Once such a connection is made, technological problems disappear and devices become easily identifiable purely through their functionality.

Smarter Planet from IBM: building smart systems across our planet, somewhat similar to IoT.

evrythng: a software engine that makes physical things smart by connecting them to the web. There is an interesting video introducing evrythng: http://vimeo.com/51878487

Issues:

One issue is the ability to rapidly create IoT applications.

The Media and Graphics lab at UBC focuses on a lightweight toolkit for developing IoT applications, targeting rapid development using Web technologies and protocols.

The other is that the huge volumes of data that IoT generates will have to be routed, captured, analyzed and acted upon in timely, relevant ways.

Various cloud-based services are emerging that are designed to help manage these kinds of sensors and the data they produce. Machine learning techniques also need to be developed in this field.

Envision:

If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss and cost.

Concretely, for example:

The fridge could reliably order milk before it runs out.

Transport networks broadcast the position of buses, trams and trains and make this data available to the public.

We could use something like CTRL+F to search for the physical things we want to find.