
Learning Log

Print only the first 5 rows of the dataset

December 31, 2017 by 河副 太智

import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

# Load the iris dataset bundled with scikit-learn
iris_dataset = load_iris()

# Show only the first 5 rows of the feature array
print(iris_dataset["data"][:5])

[[ 5.1  3.5  1.4  0.2]
[ 4.9  3.   1.4  0.2]
[ 4.7  3.2  1.3  0.2]
[ 4.6  3.1  1.5  0.2]
[ 5.   3.6  1.4  0.2]]
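Since this post is filed under Pandas, the same preview can also be done with a DataFrame and head(). A minimal sketch (my own addition, not part of the original snippet):

import pandas as pd
from sklearn.datasets import load_iris

iris_dataset = load_iris()

# Wrap the feature array in a DataFrame so the columns carry their names
iris_df = pd.DataFrame(iris_dataset["data"], columns=iris_dataset["feature_names"])
print(iris_df.head())  # head() returns the first 5 rows by default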

Filed Under: Pandas

Downloading the dataset

December 31, 2017 by 河副 太智

import numpy as np
import pandas as pd
import sklearn
from sklearn.datasets import load_iris


iris_dataset = load_iris()

print(iris_dataset.keys())
#>>>  ['DESCR', 'target_names', 'feature_names', 'target', 'data']
# DESCR contains a textual description of the dataset
# the other keys hold the actual data and metadata


print(iris_dataset["DESCR"][:193] + "\n...")
# Print the first 193 characters of DESCR to inspect the dataset description

Result

The output describes the 150 samples and their 4 attribute types:

Iris Plants Database
====================
 
Notes
-----
Data Set Characteristics:
    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
                - Iris-Setosa
                - Iris-Versicolour
                - Iris-Virginica
    :Summary Statistics:
 
    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20  0.76     0.9565  (high!)
    ============== ==== ==== ======= ===== ====================
 
    :Missing Attribute Values: None
    :Class Distribution: 33.3% for each of 3 classes.
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988
 
This is a copy of UCI ML iris datasets.
http://archive.ics.uci.edu/ml/datasets/Iris
 
The famous Iris database, first used by Sir R.A Fisher
 
This is perhaps the best known database to be found in the
pattern recognition literature.  Fisher's paper is a classic in the field and
is referenced frequently to this day.  (See Duda & Hart, for example.)  The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant.  One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.
 
References
----------
   - Fisher,R.A. "The use of multiple measurements in taxonomic problems"
     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
     Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.
     (Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
     Structure and Classification Rule for Recognition in Partially Exposed
     Environments".  IEEE Transactions on Pattern Analysis and Machine
     Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions
     on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al"s AUTOCLASS II
     conceptual clustering system finds 3 classes in the data.
   - Many, many more ...
 
...
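As a quick cross-check of the description above (150 samples, 4 numeric attributes), the shape of the data array can be inspected directly; a minimal sketch:

from sklearn.datasets import load_iris

iris_dataset = load_iris()

# 150 rows (samples) x 4 columns (features)
print(iris_dataset["data"].shape)      # (150, 4)
print(iris_dataset["feature_names"])   # names of the 4 attributes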

 

To see the possible target names (the list of iris species):

print(iris_dataset["target_names"])

['setosa' 'versicolor' 'virginica']
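The target array holds each sample's class as 0, 1 or 2; indexing target_names with it maps those codes back to species names. A minimal sketch (my own addition):

from sklearn.datasets import load_iris

iris_dataset = load_iris()

# Fancy indexing converts the 0/1/2 class codes into species names
names = iris_dataset["target_names"][iris_dataset["target"]]
print(names[:5])   # ['setosa' 'setosa' 'setosa' 'setosa' 'setosa']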

 

Filed Under: python3, scikit-learn Tagged With: data types, type list

Display a DataFrame nicely

December 31, 2017 by 河副 太智

import pandas as pd
from IPython.display import display

# Build a DataFrame from a dictionary of columns
data = {"name": ["john", "anna", "peter", "linda"],
        "location": ["new york", "paris", "berlin", "london"],
        "age": [24, 13, 53, 33]}

data_pandas = pd.DataFrame(data)
display(data_pandas)  # renders as a nicely formatted HTML table in Jupyter
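display() renders the DataFrame as an HTML table inside Jupyter; outside a notebook, plain print() still gives a readable text table. A minimal sketch assuming the same data dictionary:

import pandas as pd

data = {"name": ["john", "anna", "peter", "linda"],
        "location": ["new york", "paris", "berlin", "london"],
        "age": [24, 13, 53, 33]}

# Without IPython, print() falls back to a plain-text table
print(pd.DataFrame(data))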

 

 

Filed Under: Pandas

Plotting a graph with linspace and the sin function

December 31, 2017 by 河副 太智

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

# 100 evenly spaced points between -10 and 10
x = np.linspace(-10, 10, 100)
y = np.sin(x)

# Plot the sine curve, marking each point with an "x"
plt.plot(x, y, marker="x")
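The %matplotlib inline magic only works in Jupyter; run as a plain Python script, the figure needs plt.show(). A minimal sketch of the script variant (the axis labels are my own addition):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 100)
y = np.sin(x)

plt.plot(x, y, marker="x")
plt.xlabel("x")
plt.ylabel("sin(x)")
plt.show()  # opens a window instead of rendering inline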

 

 

Filed Under: Graphs

Line breaks

December 31, 2017 by 河副 太智

print("NumPy array: \n {}".format(aaa))

 

Filed Under: python3

LSTM time series analysis: the full picture

December 31, 2017 by 河副 太智

End-to-end LSTM time series analysis on the nikkei225.csv data:

import numpy
import matplotlib.pyplot as plt
from pandas import read_csv
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# Create the dataset: each sample is `look_back` consecutive values,
# the label is the value that follows the window
def create_dataset(dataset, look_back):
    dataX, dataY = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)

# Fix the random seed
numpy.random.seed(7)

# Load the dataset
dataframe = read_csv('nikkei225.csv', usecols=[1], engine='python', skipfooter=3)
dataset = dataframe.values
dataset = dataset.astype('float32')

# Split into training and test data
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]

# Scale the data to [0, 1]
scaler = MinMaxScaler(feature_range=(0, 1))
scaler_train = scaler.fit(train)
train = scaler_train.transform(train)
test = scaler_train.transform(test)

# Create the windowed data
look_back = 10
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# Reshape to (samples, time steps, features) as expected by the LSTM
trainX = numpy.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = numpy.reshape(testX, (testX.shape[0], testX.shape[1], 1))

# Build and train the LSTM model
model = Sequential()
model.add(LSTM(64, return_sequences=True, input_shape=(look_back, 1)))
model.add(LSTM(32))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=10, batch_size=1, verbose=2)

# Generate predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)

# Invert the scaling back to the original units
trainPredict = scaler_train.inverse_transform(trainPredict)
trainY = scaler_train.inverse_transform([trainY])
testPredict = scaler_train.inverse_transform(testPredict)
testY = scaler_train.inverse_transform([testY])

# Compute prediction accuracy (RMSE)
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))

# Shift the predictions for plotting against the original series
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict

# Plot the test portion of the data and the test predictions
plt.plot(dataframe[round(len(dataset)*0.67):])
plt.plot(testPredictPlot)
plt.show()

Result

Epoch 1/10
44s - loss: 0.0040
Epoch 2/10
44s - loss: 0.0013
Epoch 3/10
43s - loss: 0.0011
Epoch 4/10
44s - loss: 7.8079e-04
Epoch 5/10
44s - loss: 5.9064e-04
Epoch 6/10
44s - loss: 5.5586e-04
Epoch 7/10
43s - loss: 5.2437e-04
Epoch 8/10
43s - loss: 5.4960e-04
Epoch 9/10
43s - loss: 5.3203e-04
Epoch 10/10
44s - loss: 4.9286e-04
Train Score: 270.96 RMSE
Test Score: 144.13 RMSE
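A quick way to see what create_dataset produces is to run it on a tiny toy series; a minimal sketch with hypothetical values (not part of the original post):

import numpy

def create_dataset(dataset, look_back):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        dataX.append(dataset[i:(i + look_back), 0])
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)

# Toy series: the values 0..9 as a single-column array
toy = numpy.arange(10, dtype="float32").reshape(-1, 1)
X, y = create_dataset(toy, look_back=3)
print(X)  # 6 windows of 3 consecutive values: [0. 1. 2.], [1. 2. 3.], ...
print(y)  # the value right after each window: [3. 4. 5. 6. 7. 8.]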

 

 

Filed Under: Projects, Supervised Learning, Machine Learning
