How to gain 0.02% val_acc on MNIST by shrinking train_x
TL;DR
Fine-tuning only on the training samples the model gets wrong bought a small 0.02% bump in val_acc.
What I did
When building a model on MNIST, you have probably had the sad experience of lowering the learning rate only to watch the metric oscillate just below where you want it, never quite climbing.
By being selective about train_x I managed to squeeze out a small (0.02%) val_acc improvement, so here is how.
First, build a model the usual way
I built the following model.
```python
# Incantation to fix the random seeds
import os
import numpy as np
import random as rn
import tensorflow as tf

os.environ['PYTHONHASHSEED'] = '0'
np.random.seed(0)
rn.seed(0)
session_conf = tf.ConfigProto(
    intra_op_parallelism_threads=1,
    inter_op_parallelism_threads=1
)
from keras import backend as K
tf.set_random_seed(0)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)

# Load and preprocess MNIST
from keras import datasets
((train_x, train_y_org), (test_x, test_y)) = datasets.mnist.load_data()
train_x = (train_x / 255).astype("float32").reshape(-1, 28, 28, 1)
test_x = (test_x / 255).astype("float32").reshape(-1, 28, 28, 1)
train_y = np.eye(10)[train_y_org]   # one-hot encode the labels
test_y = np.eye(10)[test_y]

from tensorflow.keras.layers import *
from tensorflow.keras.models import *
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint

cp_cb = ModelCheckpoint(filepath="model_{epoch:02d}.model")

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), input_shape=(28, 28, 1), padding="same"))
model.add(PReLU())
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(Conv2D(filters=32, kernel_size=(3, 3), padding="same"))
model.add(PReLU())
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2), padding="same"))  # 28,28 -> 14,14
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="same"))
model.add(PReLU())
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="same"))
model.add(PReLU())
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2), padding="same"))  # 14,14 -> 7,7
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="same"))
model.add(PReLU())
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="same"))
model.add(PReLU())
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2), padding="same"))  # 7,7 -> 4,4
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="same"))
model.add(PReLU())
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding="same"))
model.add(PReLU())
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2), padding="same"))  # 4,4 -> 2,2
model.add(Conv2D(filters=128, kernel_size=(2, 2), padding="same"))
model.add(PReLU())
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2), padding="same"))  # 2,2 -> 1,1
model.add(Flatten())
model.add(Dense(10, activation="softmax"))
model.summary()

model.compile(optimizer=Adam(lr=0.0001), metrics=['accuracy'], loss="categorical_crossentropy")
model_history = model.fit(train_x, train_y, batch_size=16, epochs=64,
                          validation_data=(test_x, test_y), callbacks=[cp_cb])
```
The results are below (log abridged to the epochs that matter):

```
Train on 60000 samples, validate on 10000 samples
Epoch 1/64
60000/60000 [==============================] - 82s 1ms/step - loss: 0.8120 - acc: 0.7451 - val_loss: 0.4310 - val_acc: 0.8647
Epoch 2/64
60000/60000 [==============================] - 78s 1ms/step - loss: 0.1514 - acc: 0.9529 - val_loss: 0.0841 - val_acc: 0.9783
Epoch 3/64
60000/60000 [==============================] - 78s 1ms/step - loss: 0.0911 - acc: 0.9720 - val_loss: 0.0681 - val_acc: 0.9835
...
Epoch 30/64
60000/60000 [==============================] - 78s 1ms/step - loss: 0.0077 - acc: 0.9976 - val_loss: 0.0173 - val_acc: 0.9960
...
Epoch 53/64
60000/60000 [==============================] - 78s 1ms/step - loss: 0.0042 - acc: 0.9985 - val_loss: 0.0171 - val_acc: 0.9961
...
Epoch 64/64
60000/60000 [==============================] - 78s 1ms/step - loss: 0.0034 - acc: 0.9990 - val_loss: 0.0204 - val_acc: 0.9952
```
Epoch 53 recorded a val_acc of 99.61%.
Let's confirm that this model really reproduces 99.61%.
The ModelCheckpoint callback passed to fit has been saving the weights and biases after every epoch.
Load the checkpoint with load_model and evaluate:
```python
model = load_model("./model_53.model")
model.evaluate(test_x, test_y)
```

```
10000/10000 [==============================] - 2s 187us/step
[0.017076379576059116, 0.9961]
```
It does indeed report a val_acc of 0.9961.
However: this is NOT the model I am going to use.
The reason is that by epoch 53 its training accuracy was already all but saturated, leaving almost no misclassified training samples to work with.
So, somewhat arbitrarily, I will use the epoch-30 model instead, which also scores 99.60% val_acc.
```python
model = load_model("./model_30.model")
```
Picking and choosing the data
First, let's check once more how well this model currently does on the train and test sets.
```python
# training data
print(np.sum(np.argmax(model.predict(train_x), axis=1) == train_y_org))
# test data
print(np.sum(np.argmax(model.predict(test_x), axis=1) == np.argmax(test_y, axis=1)))
```
The results:
```
59976
9960
```
With 60000 train and 10000 test samples, that is 24 mistakes on train and 40 on test.
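The counting above is just argmax-versus-label bookkeeping; here is a minimal numpy sketch of it, using made-up toy predictions rather than the real model's output:

```python
import numpy as np

# toy stand-ins for model.predict() output and the integer labels
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])     # 3 samples, 2 classes
labels = np.array([0, 1, 1])

pred = np.argmax(probs, axis=1)    # predicted class per sample
n_correct = int(np.sum(pred == labels))
n_wrong = len(labels) - n_correct
print(n_correct, n_wrong)          # 2 correct, 1 wrong
```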
Now, focus on those 24 training mistakes.
Re-training on samples the model already gets right adds little, and if anything just drives overfitting.
So if we train only on the misclassified samples, with the learning rate cut to 1/10 so the weights and biases nudge along slowly, the model should at least fit the training data better.
What that does to the test data is the question of this post.
So I ran the following code next:
```python
model.compile(optimizer=Adam(lr=0.00001), metrics=['accuracy'], loss="categorical_crossentropy")
model_history_2 = []
for i in range(32):
    # keep only the training samples the current model misclassifies
    choice = np.invert(np.argmax(model.predict(train_x), axis=1) == train_y_org)
    model_history_2.append(
        model.fit(train_x[choice], train_y[choice],
                  batch_size=16, epochs=1,
                  validation_data=(test_x, test_y))
    )
```
The key is the `choice` line, which builds a boolean mask that is True only where the model's prediction disagrees with the label, and the `fit` call, which trains on just that misclassified subset.
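In isolation, the mask mechanics look like this (toy arrays standing in for `model.predict(train_x)`'s argmax and `train_y_org`):

```python
import numpy as np

pred = np.array([3, 1, 4, 4, 5])     # hypothetical predicted classes
labels = np.array([3, 1, 4, 1, 5])   # hypothetical ground-truth labels
correct = pred == labels             # True where the model is right
choice = np.invert(correct)          # flip it: True where the model erred
print(choice)                        # only index 3 is True
print(np.where(choice)[0])           # indices of the misclassified samples
```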
The result looks like this:
```
Train on 24 samples, validate on 10000 samples
Epoch 1/1
24/24 [==============================] - 7s 280ms/step - loss: 2.9693 - acc: 0.3750 - val_loss: 0.0173 - val_acc: 0.9960
Train on 23 samples, validate on 10000 samples
Epoch 1/1
23/23 [==============================] - 2s 104ms/step - loss: 2.1650 - acc: 0.3913 - val_loss: 0.0173 - val_acc: 0.9960
Train on 21 samples, validate on 10000 samples
Epoch 1/1
21/21 [==============================] - 2s 113ms/step - loss: 2.7324 - acc: 0.3333 - val_loss: 0.0173 - val_acc: 0.9960
Train on 21 samples, validate on 10000 samples
Epoch 1/1
21/21 [==============================] - 2s 112ms/step - loss: 2.9146 - acc: 0.2381 - val_loss: 0.0173 - val_acc: 0.9960
Train on 20 samples, validate on 10000 samples
Epoch 1/1
20/20 [==============================] - 2s 119ms/step - loss: 3.5161 - acc: 0.2500 - val_loss: 0.0174 - val_acc: 0.9962
Train on 17 samples, validate on 10000 samples
Epoch 1/1
17/17 [==============================] - 2s 139ms/step - loss: 3.3470 - acc: 0.2941 - val_loss: 0.0174 - val_acc: 0.9961
...
Train on 10 samples, validate on 10000 samples
Epoch 1/1
10/10 [==============================] - 2s 233ms/step - loss: 3.6156 - acc: 0.2000 - val_loss: 0.0177 - val_acc: 0.9959
```
The fifth loop hits 99.62%. After that, presumably from overfitting the handful of hard samples, val_loss creeps upward and val_acc drifts back down to around 99.59-99.60%.
The point I want to make: the baseline had to grind through all 53 epochs of full data, whereas the epoch-30 model needed only five loops over roughly 20 samples each to beat that val_acc.
Computationally, that seems like a pretty good deal.
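A rough back-of-the-envelope on that claim, using the per-loop sample counts from the logs above:

```python
# samples processed after the epoch-30 checkpoint, per approach:
full_rest = (53 - 30) * 60000     # keep training on the full set up to epoch 53
mined = 24 + 23 + 21 + 21 + 20    # the five error-only loops (counts from the log)
print(full_rest, mined)           # 1380000 vs 109
```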
That's the story.
Recommended reading: 現場で使える! TensorFlow開発入門 Kerasによる深層学習モデル構築手法 (AI & TECHNOLOGY)
- Authors: 太田満久, 須藤広大, 黒澤匠雅, 小田大輔
- Publisher: 翔泳社
- Release date: 2018/04/19
- Format: paperback