No security whatsoever
Install
$ pip install jupyter
Launch
$ jupyter notebook --NotebookApp.token='xxx' --ip=0.0.0.0 --port=9999
After that, just open the port on the server side and you're good.
I have a movie blog, but I'm writing this here.
It was so good!!!!
The opening evokes the first film; you can really feel the respect for the original. Watching the original now, it's monochrome and has that tokusatsu look, so the miniature sets don't feel realistic, but I imagine audiences back then felt the same intense terror people feel watching Shin Godzilla today.
I own the BD, and I was sure the broadcast would be cut, so I hesitated to watch it.
But of course I ended up watching it anyway, and of course it was great!!!
As for cuts: the ending credits and the Koide scene (etc.) were cut, but the story never skipped around, so it was fine.
The final shot of the tail is unsettling no matter how many times I see it.
There are all sorts of theories out there, but personally I find the fifth-form interpretation plausible.
It was also fun that "Cabinet Resignation Beam" made it into the trending topics! Glad I watched it.
I saw it twice in theaters, and it's great no matter how many times I watch it.
Apparently there's going to be a Hollywood movie where Godzilla battles King Kong by 2020, so I'm looking forward to that too.
I'd love for Japan to make another one in the meantime.
Even if the series doesn't continue, I have high hopes for the future of the Japanese Godzilla!
It looks like I'll be using Elasticsearch at work, so I've been told to read this book and am working through it.
データ分析基盤構築入門[Fluentd、Elasticsearch、Kibanaによるログ収集と可視化]
Below is my own study log.
A broad overview of Elasticsearch
It explains the basic usage of Elasticsearch. I wanted to spin up a Docker container and play around, but Docker wouldn't run on my local machine, so for now I'm just reading through the content...
Around this point, I got Docker working again.
$ docker run -it --rm -p 9200:9200 -p 9300:9300 \
    -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" \
    docker.elastic.co/elasticsearch/elasticsearch:5.1.1
Looks like it started.
$ curl -XGET http://127.0.0.1:9200/ | jq
{
  "error": {
    "root_cause": [
      {
        "type": "security_exception",
        "reason": "missing authentication token for REST request [/]",
        "header": {
          "WWW-Authenticate": "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type": "security_exception",
    "reason": "missing authentication token for REST request [/]",
    "header": {
      "WWW-Authenticate": "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status": 401
}
...?
Elastic StackのX-Packを試す(インストール編) | Developers.IO
Apparently the Docker image I pulled ships with something called X-Pack, which makes Basic authentication mandatory.
$ curl -u elastic 'localhost:9200?pretty'
Access it with the user elastic, enter the password changeme, and you get a response back.
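As an aside, that Basic auth is just a base64-encoded `user:password` pair in a header, so a minimal sketch of what curl's `-u elastic` ends up sending (assuming the default changeme password) looks like:

```python
import base64

# Build the Authorization header that curl -u elastic:changeme would send.
creds = "elastic:changeme"
token = base64.b64encode(creds.encode("ascii")).decode("ascii")
auth_header = f"Basic {token}"
print(auth_header)  # → Basic ZWxhc3RpYzpjaGFuZ2VtZQ==
```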
Do I have to do this every time? What a pain...
There's an option to disable it... thank goodness.
$ docker run -it --rm -p 9200:9200 -p 9300:9300 \
    -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" \
    -e "xpack.security.enabled=false" \
    docker.elastic.co/elasticsearch/elasticsearch:5.1.1
$ curl -XGET http://127.0.0.1:9200/ | jq
{
  "name": "fS_DHWK",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "2mg8kb44RGiFDpWLWyiFJw",
  "version": {
    "number": "5.1.1",
    "build_hash": "5395e21",
    "build_date": "2016-12-06T12:36:15.409Z",
    "build_snapshot": false,
    "lucene_version": "6.3.0"
  },
  "tagline": "You Know, for Search"
}
OK.
$ curl -XGET http://localhost:9200/_cluster/health?pretty | jq
{
  "cluster_name": "docker-cluster",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 2,
  "active_shards": 2,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 2,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 50
}
$ curl -XPUT http://localhost:9200/test_index
{"acknowledged":true,"shards_acknowledged":true}
I forgot to pipe it through the jq command.
I hastily ran it a second time and got an error.
{
  "error": {
    "root_cause": [
      {
        "type": "index_already_exists_exception",
        "reason": "index [test_index/CvbPEUedT8aJbWciMdaUxQ] already exists",
        "index_uuid": "CvbPEUedT8aJbWciMdaUxQ",
        "index": "test_index"
      }
    ],
    "type": "index_already_exists_exception",
    "reason": "index [test_index/CvbPEUedT8aJbWciMdaUxQ] already exists",
    "index_uuid": "CvbPEUedT8aJbWciMdaUxQ",
    "index": "test_index"
  },
  "status": 400
}
$ curl -XDELETE http://localhost:9200/test_index
{"acknowledged":true}
$ curl -XPUT http://localhost:9200/test_index/apache_log/1 -d '
{
  "host": "localhost",
  "timestamp": "06/May/2014:06:11:48 + 0000",
  "verb": "GET",
  "request": "/category/finance",
  "httpversion": "1.1",
  "response": "200",
  "bytes": "51"
}' | jq
{
  "_index": "test_index",
  "_type": "apache_log",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "created": true
}
You can store a document by specifying /index-name/type-name/ID in the URL.
Deleting data works the same way as before.
$ curl -XDELETE http://localhost:9200/test_index/apache_log/1
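The /index/type/id scheme is simple enough to capture in a tiny helper (a sketch of my own; the host and names are just the ones from the experiments above):

```python
def doc_url(host, index, doc_type, doc_id):
    """Build the document URL Elasticsearch uses: /index/type/id."""
    return f"http://{host}/{index}/{doc_type}/{doc_id}"

print(doc_url("localhost:9200", "test_index", "apache_log", 1))
# → http://localhost:9200/test_index/apache_log/1
```

The same URL works for PUT (index), GET (fetch), and DELETE.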
Registering documents one by one is tedious and performs poorly, so there's a Bulk API you should use instead.
It wants the payload in a format called NDJSON (first time I've heard of it).
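NDJSON turns out to just mean one JSON object per line: for the Bulk API, an action line followed by the document itself. I haven't run this against the cluster, but a sketch of building such a payload (index/type names are the ones from above) would be:

```python
import json

def build_bulk_body(index, doc_type, docs):
    """Build an NDJSON Bulk API body: an action line, then a source line, per doc."""
    lines = []
    for i, doc in enumerate(docs, start=1):
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type, "_id": str(i)}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # Bulk bodies must end with a newline

body = build_bulk_body("test_index", "apache_log",
                       [{"verb": "GET", "response": "200"},
                        {"verb": "POST", "response": "404"}])
print(body)
```

You'd then POST that body to /_bulk with Content-Type: application/x-ndjson.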
Searching all documents
$ curl -XGET http://localhost:9200/test_index/_search -d '
{
  "query": {
    "match_all": {}
  }
}' | jq
{ "took": 60, "timed_out": false, "_shards": { "total": 5, "successful": 5, "failed": 0 }, "hits": { "total": 1, "max_score": 1, "hits": [ { "_index": "test_index", "_type": "apache_log", "_id": "1", "_score": 1, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" } } ] } }
Search parameters
Response fields
I don't quite get these yet, so let's experiment. (Before this I added about three more documents.)
$ curl -XGET http://localhost:9200/test_index/_search -d '
{ "from": 2 }' | jq
{ "took": 1, "timed_out": false, "_shards": { "total": 5, "successful": 5, "failed": 0 }, "hits": { "total": 3, "max_score": 1, "hits": [ { "_index": "test_index", "_type": "apache_log", "_id": "3", "_score": 1, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" } } ] } }
$ curl -XGET http://localhost:9200/test_index/_search -d '
{ "query": { "match_all": {} }, "size": 1 }' | jq
{ "took": 2, "timed_out": false, "_shards": { "total": 5, "successful": 5, "failed": 0 }, "hits": { "total": 3, "max_score": 1, "hits": [ { "_index": "test_index", "_type": "apache_log", "_id": "2", "_score": 1, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" } } ] } }
# sort didn't work
# the problem seems to be that the field is text
$ curl -XGET http://localhost:9200/test_index/_search -d '
{ "query": {"match_all": {}}, "sort": [{"bytes": "desc"}] }' | jq
{ "error": { "root_cause": [ { "type": "illegal_argument_exception", "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [bytes] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory." } ], "type": "search_phase_execution_exception", "reason": "all shards failed", "phase": "query", "grouped": true, "failed_shards": [ { "shard": 0, "index": "test_index", "node": "hRHOai-CSA2RDZKHGqtsrg", "reason": { "type": "illegal_argument_exception", "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [bytes] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory." } } ], "caused_by": { "type": "illegal_argument_exception", "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [bytes] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory." } }, "status": 400 }
# I remembered this part later on, when learning that a field can have multiple sub-fields
# if text doesn't work, try sorting on keyword
$ curl -XGET http://localhost:9200/test_index/_search -d '
{"sort": [{"bytes.keyword": "desc"}]}' | jq
{ "took": 4, "timed_out": false, "_shards": { "total": 5, "successful": 5, "failed": 0 }, "hits": { "total": 5, "max_score": null, "hits": [ { "_index": "test_index", "_type": "apache_log", "_id": "5", "_score": null, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance/hogehogehoghoehoghaoshdifahsidhfiashdifahsidfhiashdfihaisdhfiahsdifhaisdfkakwlejfoawejifjaisjdflkajefjaiohgoiahiehiwhfkasdofaoiefalksjdflkasjdkfalskhfkahsdfhasdhfasdfhasdfhasdfhashdfhasidfhaishdfiahsidfhiashfihasidfhias/category/finance/hogehogehoghoehoghaoshdifahsidhfiashdifahsidfhiashdfihaisdhfiahsdifhaisdfkakwlejfoawejifjaisjdflkajefjaiohgoiahiehiwhfkasdofaoiefalksjdflkasjdkfalskhfkahsdfhasdhfasdfhasdfhasdfhashdfhasidfhaishdfiahsidfhiashfihasidfhiasasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdf", "httpversion": "1.1", "response": "200", "bytes": "51" }, "sort": [ "51" ] }, { "_index": "test_index", "_type": "apache_log", "_id": "2", "_score": null, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" }, "sort": [ "51" ] }, { "_index": "test_index", "_type": "apache_log", "_id": "4", "_score": null, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance/hogehogehoghoehoghaoshdifahsidhfiashdifahsidfhiashdfihaisdhfiahsdifhaisdfkakwlejfoawejifjaisjdflkajefjaiohgoiahiehiwhfkasdofaoiefalksjdflkasjdkfalskhfkahsdfhasdhfasdfhasdfhasdfhashdfhasidfhaishdfiahsidfhiashfihasidfhias/category/finance/hogehogehoghoehoghaoshdifahsidhfiashdifahsidfhiashdfihaisdhfiahsdifhaisdfkakwlejfoawejifjaisjdflkajefjaiohgoiahiehiwhfkasdofaoiefalksjdflkasjdkfalskhfkahsdfhasdhfasdfhasdfhasdfhashdfhasidfhaishdfiahsidfhiashfihasidfhiasasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdf", "httpversion": "1.1", "response": "200", "bytes": "51" }, "sort": [ "51" ] }, { "_index": "test_index", "_type": "apache_log", "_id": "1", "_score": null, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" }, "sort": [ "51" ] }, { "_index": "test_index", "_type": "apache_log", "_id": "3", "_score": null, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" }, "sort": [ "51" ] } ] } }
# It worked!!!!!!
Apparently you can write complex searches using a special query syntax (?)
$ curl -XGET http://localhost:9200/test_index/_search -d '
{
  "query": {
    "query_string": {
      "query": "request:category AND response:200"
    }
  }
}' | jq
{ "took": 26, "timed_out": false, "_shards": { "total": 5, "successful": 5, "failed": 0 }, "hits": { "total": 3, "max_score": 0.5457982, "hits": [ { "_index": "test_index", "_type": "apache_log", "_id": "2", "_score": 0.5457982, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" } }, { "_index": "test_index", "_type": "apache_log", "_id": "1", "_score": 0.5457982, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" } }, { "_index": "test_index", "_type": "apache_log", "_id": "3", "_score": 0.5457982, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" } } ] } }
Apparently it can aggregate.
$ curl -XGET http://localhost:9200/test_index/_search -d '
{
  "query": { "match_all": {} },
  "aggs": {
    "request_aggs": {
      "terms": { "field": "request", "size": 10 }
    }
  }
}' | jq
{ "error": { "root_cause": [ { "type": "illegal_argument_exception", "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [request] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory." } ], "type": "search_phase_execution_exception", "reason": "all shards failed", "phase": "query", "grouped": true, "failed_shards": [ { "shard": 0, "index": "test_index", "node": "hRHOai-CSA2RDZKHGqtsrg", "reason": { "type": "illegal_argument_exception", "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [request] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory." } } ], "caused_by": { "type": "illegal_argument_exception", "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [request] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory." } }, "status": 400 }
An error!?
By default, Elasticsearch stores string fields using the text field type, and text fields can't be aggregated. You're kidding...
I don't fully get it yet, but apparently switching the field to request.keyword makes the aggregation work.
$ curl -XGET http://localhost:9200/test_index/_search -d '
{
  "query": { "match_all": {} },
  "aggs": {
    "request_aggs": {
      "terms": { "field": "request.keyword", "size": 10 }
    }
  }
}' | jq
{ "took": 3, "timed_out": false, "_shards": { "total": 5, "successful": 5, "failed": 0 }, "hits": { "total": 3, "max_score": 1, "hits": [ { "_index": "test_index", "_type": "apache_log", "_id": "2", "_score": 1, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" } }, { "_index": "test_index", "_type": "apache_log", "_id": "1", "_score": 1, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" } }, { "_index": "test_index", "_type": "apache_log", "_id": "3", "_score": 1, "_source": { "host": "localhost", "timestamp": "06/May/2014:06:11:48 + 0000", "verb": "GET", "request": "/category/finance", "httpversion": "1.1", "response": "200", "bytes": "51" } } ] }, "aggregations": { "request_aggs": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ { "key": "/category/finance", "doc_count": 3 } ] } } }
Aggregation types
Registration does a lot for you by default, but if you don't define fields properly, searching becomes painful later. So the point of this part is: define your fields properly.
First, check the current field mappings.
$ curl -XGET http://localhost:9200/test_index/_mapping | jq
{ "test_index": { "mappings": { "apache_log": { "properties": { "bytes": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "host": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "httpversion": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "request": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "response": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "timestamp": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "verb": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } } } } }
String data gets two field types generated by default: text and keyword. (!?) I see... so that's how the data is stored.
How to put it... for someone who has only ever used RDBs, this feels too flat to get used to...
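To make the multi-field shape sink in, here's a sketch that mimics what ES 5.x's dynamic mapping generates for string fields (my own emulation of the output above, not the real mapping logic):

```python
def default_string_mapping(field_names):
    """Emulate ES 5.x dynamic mapping for string fields: each becomes
    a text field with a .keyword sub-field capped at ignore_above 256."""
    return {
        name: {
            "type": "text",
            "fields": {"keyword": {"type": "keyword", "ignore_above": 256}},
        }
        for name in field_names
    }

props = default_string_mapping(["host", "request"])
print(props["request"]["fields"]["keyword"]["type"])  # → keyword
```

So searching `request` hits the analyzed text field, while sorting/aggregating uses `request.keyword`.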
$ curl -XGET http://localhost:9200/test_index/_search -d '
{
  "query": {
    "query_string": { "query": "request:finance" }
  },
  "aggs": {
    "request_aggs": {
      "terms": { "field": "request.keyword", "size": 10 }
    }
  }
}' | jq
{
  # ... omitted
  "aggregations": { "request_aggs": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ { "key": "/category/finance", "doc_count": 3 } ] } }
}
Huh. I see.
There's an ignore_above setting, and apparently you can change it by specifying it in the mapping.
$ curl -XPUT http://localhost:9200/test_index2 -d '
{
  "mappings": {
    "apache_log": {
      "properties": {
        "request": {
          "type": "text",
          "fields": {
            "keyword": { "type": "keyword", "ignore_above": 1000 }
          }
        }
      }
    }
  }
}' | jq
$ curl -XGET http://localhost:9200/test_index2 | jq
{ "test_index2": { "aliases": {}, "mappings": { "apache_log": { "properties": { "request": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 1000 } } } } } }, "settings": { "index": { "creation_date": "1510229419825", "number_of_shards": "5", "number_of_replicas": "1", "uuid": "6T3UPgbVR6OIZ0i6VudoWQ", "version": { "created": "5010199" }, "provided_name": "test_index2" } } } }
Hmm.
When storing log data, the recommendation is to create a separate index per day. But specifying the mapping every time you create an index is a drag.
So you can turn the mapping into a template; when a new index matches the template's pattern, that mapping gets applied automatically.
$ curl -XPUT http://localhost:9200/_template/apache_log_template -d '
{
  "template": "test_*",
  "mappings": {
    "apache_log": {
      "properties": {
        "request": {
          "type": "text",
          "fields": {
            "keyword": { "type": "keyword", "ignore_above": 1000 }
          }
        }
      }
    }
  }
}' | jq
{ "acknowledged": true }
Registered...?
Let's test it.
$ curl -XPUT http://localhost:9200/test_index5 | jq
{ "acknowledged": true, "shards_acknowledged": true }
$ curl -XGET http://localhost:9200/test_index5 | jq
{ "test_index5": { "aliases": {}, "mappings": { "apache_log": { "properties": { "request": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 1000 } } } } } }, "settings": { "index": { "creation_date": "1510230084228", "number_of_shards": "5", "number_of_replicas": "1", "uuid": "iIY3umCBQDmQBD_6M_kUmQ", "version": { "created": "5010199" }, "provided_name": "test_index5" } } } }
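The template's "template": "test_*" is just a wildcard match on the index name, so daily indexes like test_index-2017.11.09 would also pick it up. A sketch of both halves, using Python's fnmatch as a stand-in for ES's pattern matching (the naming scheme here is the common per-day convention, my assumption, not from the book):

```python
from datetime import date
from fnmatch import fnmatch

def daily_index_name(prefix, day):
    """Generate a per-day index name in the usual prefix-YYYY.MM.DD style."""
    return f"{prefix}-{day:%Y.%m.%d}"

name = daily_index_name("test_index", date(2017, 11, 9))
print(name)                     # → test_index-2017.11.09
print(fnmatch(name, "test_*"))  # the template pattern matches → True
```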
At registration time, you can run simple preprocessing on documents before they're indexed (ingest pipelines).
$ curl -XPUT http://localhost:9200/_ingest/pipeline/test_pipeline -d '
{
  "description": "parse number and clientip using grok",
  "processors": [
    {
      "grok": {
        "field": "text",
        "patterns": ["%{NUMBER:duration} %{IP:client}"]
      },
      "remove": {
        "field": "text"
      }
    }
  ]
}' | jq
{ "acknowledged": true }
$ curl -XGET http://localhost:9200/_ingest/pipeline/ | jq
{ "xpack_monitoring_2": { "description": "2: This is a placeholder pipeline for Monitoring API version 2 so that future versions may fix breaking changes.", "processors": [] }, "test_pipeline": { "description": "parse number and clientip using grok", "processors": [ { "grok": { "field": "text", "patterns": [ "%{NUMBER:duration} %{IP:client}" ] }, "remove": { "field": "text" } } ] } }
# check that it works
$ curl -XPOST http://localhost:9200/_ingest/pipeline/test_pipeline/_simulate -d '
{
  "docs": [
    { "_source": { "text": "3.44 55.3.244.1" } }
  ]
}' | jq
{ "docs": [ { "doc": { "_type": "_type", "_index": "_index", "_id": "_id", "_source": { "duration": "3.44", "client": "55.3.244.1" }, "_ingest": { "timestamp": "2017-11-09T12:30:48.371+0000" } } } ] }
# actual registration looks like this
$ curl -XPUT http://localhost:9200/sample_index/sample/1?pipeline=test_pipeline -d '
{ "text": "3.44 55.3.244.1" }' | jq
{ "_index": "sample_index", "_type": "sample", "_id": "1", "_version": 1, "result": "created", "_shards": { "total": 2, "successful": 1, "failed": 0 }, "created": true }
$ curl -XGET http://localhost:9200/sample_index/sample/1 | jq
{ "_index": "sample_index", "_type": "sample", "_id": "1", "_version": 1, "found": true, "_source": { "duration": "3.44", "client": "55.3.244.1" } }
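Grok patterns like %{NUMBER:duration} %{IP:client} are essentially named regexes. A rough Python equivalent of what the pipeline's grok processor extracts (the patterns below are my own simplified stand-ins, not the real grok library):

```python
import re

# Simplified stand-ins for grok's NUMBER and IP patterns.
GROK_RE = re.compile(
    r"(?P<duration>\d+(?:\.\d+)?)\s+(?P<client>\d{1,3}(?:\.\d{1,3}){3})"
)

def parse_text_field(text):
    """Mimic test_pipeline: pull duration and client out of the 'text' field."""
    m = GROK_RE.match(text)
    if m is None:
        raise ValueError(f"no grok match for: {text!r}")
    return m.groupdict()

print(parse_text_field("3.44 55.3.244.1"))
# → {'duration': '3.44', 'client': '55.3.244.1'}
```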
I've been fiddling with keras-rl since this morning. Still not done...
TypeError: 'RingBuffer' object does not support item assignment
What is this error... painful.
特殊メソッド名 - Dive Into Python 3 日本語版
Implementing the special method should fix it. Sure enough, this RingBuffer class has no method for assigning items.
class _RingBuffer(RingBuffer):
    def __setitem__(self, idx, v):
        if idx < 0 or idx >= self.length:
            raise KeyError()
        self.data[idx] = v
This is the kind of thing I like about Python.
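A minimal standalone illustration of the same protocol (a toy fixed-size buffer of my own, not keras-rl's actual RingBuffer):

```python
class Buffer:
    """Toy fixed-size buffer showing the __setitem__/__getitem__ protocol."""
    def __init__(self, size):
        self.data = [None] * size
        self.length = size

    def __getitem__(self, idx):
        if idx < 0 or idx >= self.length:
            raise KeyError(idx)
        return self.data[idx]

    def __setitem__(self, idx, v):
        if idx < 0 or idx >= self.length:
            raise KeyError(idx)
        self.data[idx] = v

buf = Buffer(3)
buf[0] = "a"   # buf[0] = ... is sugar for Buffer.__setitem__(buf, 0, "a")
print(buf[0])  # → a
```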
Long title, I know.
These are notes for myself. They may be useful as a reference, but please don't follow these steps as-is.
The reason: the GPU is recognized inside the Docker container, but for some reason TensorFlow on the host doesn't recognize the GPU... more precisely, a symbol error seems to be occurring.
I'm looking into it now; until then, consider this a personal memo.
I spun up an AWS p2 instance (expensive...) and chose Ubuntu.
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
$ mkdir work
$ cd work/
Get the download link from the page above and download it.
$ wget ${ANACONDA_URL}
$ bash Anaconda3-5.0.1-Linux-x86_64.sh
$ source ~/.bashrc
The steps come from here:
$ sudo usermod -aG docker $USER
I'm not sure why CUDA is needed, but install it anyway.
$ wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
$ cat 7fa2af80.pub | sudo apt-key add -
$ wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_9.0.176-1_amd64.deb
$ sudo dpkg -i cuda-repo-ubuntu1604_9.0.176-1_amd64.deb
$ sudo apt update
$ sudo apt install linux-generic
$ sudo apt install cuda cuda-drivers
# note: a reboot is required here (remove the '#' when you actually run this)
$ # sudo reboot
$ sudo apt remove linux-virtual
$ sudo apt autoremove
$ rm 7fa2af80.pub cuda-repo-ubuntu1604_9.0.176-1_amd64.deb
$ vim ~/.bashrc
Use vim to write in the following:
export PATH="/usr/local/cuda-9.0/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-9.0/lib64:$LD_LIBRARY_PATH"
$ source ~/.bashrc
# verify the install
$ nvidia-smi
$ sudo apt install nvidia-modprobe
$ wget https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
$ sudo dpkg -i nvidia-docker_1.0.1-1_amd64.deb
# check that the GPU is recognized inside Docker
$ nvidia-docker run --rm nvidia/cuda nvidia-smi
$ pip install nvidia-docker-compose
$ pip install tensorflow  # the host-side GPU isn't recognized for some reason, so install the CPU version