Non-residential building occupancy modeling. Part II. Occupancy classification
The dataset was taken from this source. It combines data of several kinds: occupant surveys, sensor data logging, weather records, and environmental variables.
The full feature list consists of 118 features, which can be grouped as: general (occupancy, time), environment (indoor, outdoor), personal characteristics (age, office type, accepted sensation range, etc.), comfort/productivity/satisfaction, behavior (clothing, window use, interaction with the thermostat, etc.), and personal values (choices on different set points). The data cover 24 occupants in both private and shared offices. The first task is to implement binary occupancy classification for each occupant using input data from sensors and time.
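Since the dataset covers all 24 occupants, setting up the per-occupant classification problem amounts to filtering the rows for one occupant and keeping the sensor/time inputs plus the occupancy label, which is how a per-occupant subset like the sample.csv used below could be produced. A minimal sketch, assuming a CSV export with named columns (the file name, column names, and occupant id below are hypothetical, not from the dataset):

import pandas as pd

# hypothetical file and column names; the real export will differ
df = pd.read_csv("full_dataset.csv")
one_occupant = df[df["occupant_id"] == 1]                 # rows for a single occupant
X = one_occupant[["time", "temperature", "co2"]].values   # sensor/time inputs
y = one_occupant["occupancy"].values                      # binary target: 1 = present, 0 = absent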
For rapid prototyping I will use Keras, the Python wrapper for TensorFlow, together with the Anaconda distribution.
- First, load all the required libraries:
from keras.models import Sequential
from keras.layers import Dense
import numpy
import matplotlib.pyplot as plt

# fix the random seed for reproducibility
numpy.random.seed(15)
In my case I took Time, Temperature, and CO2 as the input values:
# load the occupancy dataset
dataset = numpy.loadtxt("sample.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:, 0:3]  # take the first three columns (0-2) as inputs: Time, Temperature, CO2
Y = dataset[:, 3]    # the fourth column (index 3) is the occupancy value: 1 (present) or 0 (absent)
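Because Time, Temperature, and CO2 live on very different numeric scales, standardizing the inputs can help tanh/sigmoid layers converge. This step is not in the original workflow; a minimal sketch:

X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each input column to zero mean and unit variance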
# create the model: a fully-connected network with three layers
model = Sequential()
model.add(Dense(7, input_dim=3, activation='tanh'))  # first layer: 7 neurons, 3 inputs, tanh activation
model.add(Dense(3, activation='tanh'))               # hidden layer: 3 neurons, tanh activation
model.add(Dense(1, activation='sigmoid'))            # output layer: sigmoid gives an occupancy probability
# compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# train (fit) the model on the loaded data by calling fit()
model.fit(X, Y, epochs=150, batch_size=4)
'''
Here the model is fitted on the whole dataset (X, Y) for 150 epochs.
To split the data into training and validation parts instead, write:
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10)
so the model trains on 67% of the data and validates on the remaining 33%.
'''

# calculate predictions
predictions = model.predict(X)
rounded = [round(x[0]) for x in predictions]  # round the sigmoid outputs to 0/1 class labels
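To get a rough estimate of how the classifier generalizes, one could also hold out part of the data for testing and score the model on it. A minimal sketch using scikit-learn's train_test_split (the 70/30 split is an assumption, not part of the original workflow):

from sklearn.model_selection import train_test_split

# hold out 30% of the samples for testing (split ratio is an assumption)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=15)
model.fit(X_train, Y_train, epochs=150, batch_size=4, verbose=0)
loss, accuracy = model.evaluate(X_test, Y_test, verbose=0)
print("Test accuracy: %.2f%%" % (accuracy * 100))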