1
Timestamp
What features of Kubernetes do you enjoy using to deploy and operate applications?
What difficulties do you have deploying and operating applications in Kubernetes?
What 3rd party tools do you use to deploy and operate applications in Kubernetes?
Why do you use those 3rd party tools?
What is your development workflow? For example, do you need to be able to test/deploy locally?
How do you incorporate CI/CD for your Kubernetes applications?
What role does version control play in managing Kubernetes applications?
Within the scope of a single app, what type of resources do you need to deploy?
What are your preferred methods of debugging an application's infrastructure? (e.g. "kubectl logs")
If you go directly to the node, what do you do or look for?
What debugging tools are missing from Kubernetes and kubectl?
How do you manage secrets for applications deployed in Kubernetes?
Do you find the default Kubernetes scheduler sufficient for your applications?
How would you like to see the experience of using replication controllers made easier?
Is auto-scaling a desired feature?
How do you use Namespaces? For example, one namespace per application, per service, or one for an "environment" like production or staging.
2
6/15/2016 9:57:12 | automatic scheduling and orchestration | upgrading kubernetes | ansible | supported by red hat | CI/CD | OpenShift | central
database, web application, rest web services, instant app (opensource)
remote debugging | logs, more logs
interrupting containers before they crash, or before the entrypoint exits. simply with a debug flag that will wrap the original entrypoint, thus allowing the container to not exit, and allowing devops to go inside and watch for stuff
secrets | yes | none
yes, would be nice to have other metrics for scaling; livenessProbe => loadnessProbe that may return an integer from 0 to 100
project per user, namespace per environment,
3
6/15/2016 10:04:07 | Deployment | Lack of webhooks | CircleCI | Easy to configure | Yes | Test and deploy | Triggers CI | Docker image | Kubectl logs, describe rc | Health
Support log severity and meta tags with gce fluentd adapter
Kubectl secret | Yes | Nice to have | Staging, production
4
6/15/2016 10:14:55
Kubectl & Deployment API Objects. Kubectl because it controls all interaction with the cluster, and Deployment Objects because they are the de facto API resource to define apps
Logs & error descriptions could be better - i.e. when testing a Docker image in k8s, if the container itself is not properly starting or is missing some sort of dependency, the logs & errors reported are a bit ambiguous at times as observed if you've ever had a Pod stuck in the ContainerCreating/Terminating loop
None at the moment. Kubectl is the only entry point into the system
n/a
1) Create base Docker image for app
2) Define app in Deployment API object
3) Use ci/cd pipeline to embed source code into base Docker image, creating a container image for testing on k8s
4) In this pipeline, the testable image is run in a development namespace on k8s to ensure it works in the cluster as expected. Similarly, there are stage & prod namespaces for each app that the Pod gets deployed into along the way.
5) If tests pass in the development namespace, changes are pushed to master
6) Git push webhook triggers CI tests & creates official Docker image for app
7) The new image gets deployed to the staging namespace for testing automatically on successful merges with master
8) If rollout to stage is successful, manually deploy to production when ready
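A minimal sketch (not the respondent's actual tooling) of what steps 4 and 7 above might look like with plain kubectl; the registry, image, deployment, container, and namespace names are hypothetical:
    # Step 4: run the testable image in the development namespace.
    kubectl --namespace=development run myapp-test \
      --image=registry.example.com/myapp:candidate-abc1234
    # Step 7: after the merge to master, point the staging Deployment at the official image.
    kubectl --namespace=staging set image deployment/myapp \
      myapp=registry.example.com/myapp:abc1234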
It tests the source, builds & pushes the Docker image for the app, and can be used to issue deployments to specific environment targets based on intention
It's an integral part of the development process, and each Docker image created by the CI/CD pipeline tags the Deployment API object of the app with a revision/commit label for informational insight into what version of the code is running
Deployment, Services (if outside access required), ConfigMaps, Secrets (depending on usecase), PVC + PV's
kubectl logs, kubectl describe, kubectl exec
Depends on issue, but primarily the logs for the Node's kubelet and the Master's controller-manager, as they tend to be the only real helpful/useful set of logs for most of the development process
Constantly pushing & pulling Docker images to a Docker repo throughout the development process can use up lots of time & bandwidth if development testing itself is done on k8s. Even more so if you're trying to get your app to just boot/work. It'd be great if k8s was able to use the Docker images from my local development/Docker host in some way to save on time/bandwidth, but also be able to shorten the window between development iteration and testing+debugging
Using the Secrets API resource, but have considered ConfigMaps for usage with Secrets, as they are similar objects with the exception of Secrets running in a tmpfs, and ConfigMaps having the ability to consume updates to the settings if using a volume. Consuming updates to Secrets a la ConfigMap is coming, but until then, ConfigMaps fill this gap if your Secrets are changing on some cadence.
Yes
ReplicaSets are the new standard meant to replace ReplicationControllers and ReplicaSets are created & handled for me automatically by the Deployment API object, so for one I'd like to see that update permeate throughout usage docs etc. as I hardly touch ReplicationControllers much these days.
Yes!
Every microservice, team, testing environment, and deployment target has its own namespace
5
6/15/2016 10:11:35 | Rolling updates | Scheduled jobs | Not yet
We plan to CD development branches
Services, dbs, persistent disks, replication controllers
Kubectl logs | Secure git repo
6
6/15/2016 10:16:17
reliability, simple concepts, IP address per pod, services, etc...
Lack of support for shard id on replication controller, diagnostics for OOM are difficult
none | NA
Local test is still a work in progress.
Jenkins running on k8s for unit test of gerrit reviews; no CD.
Undefined
Multiple shards of a given pod working on different partitions of data + debug console front-end.
prometheus + grafana for monitoring; GCP stackdriver for logs (stack driver performance is very poor)
OOM diagnostics; network connection table (e.g. netstat on pod network namespace).
OOM diagnostics/pod termination status is unreliable. list the contents of a docker container image. Reliable logs in GCP.
kubernetes secrets | yes.
Add support to allocate a shard id and perform rolling updates. Yes it creates complexity in the replication controller; but that is preferable to the user having to roll out a separate etcd cluster just to allocate shard ids.
No
prod, test, user experiments.
7
6/15/2016 10:35:59 | Deployments, Secrets, Volumes
Creating clusters is very hard (exception being GKE). Parameterizing deployments, managing docker image construction and registries, converting old apps to run properly in containers,
starting to experiment with helm
would like a good local (desktop) development story, but the current solutions have some limitation or another.
Starting to look at Jenkins pipeline to create a CI/CD workflow. Looking towards the "immutable" image concept
Want to version all of my manifests and artifacts. Drives CI/CD process
Some kind of persistence layer. Often need keystores for SSL integration
debugging is hard. kc logs and kc exec /bin/bash are often used. Would love an interactive shortcut for these, or a nice web gui that would show you running containers, and launch a shell window if you click on a container.
mostly application logs. Sometimes to troubleshoot networking connectivity (is DNS working inside this container?). Sometime to see what the image looks like (did it pull the one I expected? Are the secrets being mounted correctly?)
As above - I'd love to see an interactive (maybe web GUI) that shows you the running containers, and lets you explore them. Perhaps opening up a shell window to the container, or showing you the stdout (logs) contents.
This is hard. Using shell scripts to generate the secrets (kubectl) - but the whole thing feels quite ad hoc and brittle. This is partly an app issue (example: tomcat wants a certain keystore location).
Having the ability to mount a specific secret file (instead of a volume) would make things easier. Right now the volume overlays the existing contents - which breaks many apps. You need to resort to using sym links or copying the secrets.
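A minimal sketch (not the respondent's actual script) of generating such a secret with kubectl; the secret name and keystore path are hypothetical:
    # Package a Java keystore into a Secret under a fixed key name.
    kubectl create secret generic tomcat-keystore \
      --from-file=keystore.jks=./certs/keystore.jks
    # Inspect the result (values are base64-encoded in the output).
    kubectl get secret tomcat-keystore -o yaml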
Right now yes - but I have not yet used it in production.
yes
separation of developers and environments.
8
6/15/2016 10:41:23 | The API
Lack of support for load balancers such as HAProxy/Nginx (ingress is the right direction though)
Cloud RTI (https://cloud-rti.com)
Production deployments need deployment automation, LB integration and monitoring
Running outside containers (mostly) during dev. Pushing to a K8s cluster from a build pipeline
Amdatu Deployer (Apache licensed): https://bitbucket.org/amdatulabs/amdatu-kubernetes-deployer
We use alpha/beta/prod tags on our docker containers, dual tagged with the exact git versions.
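A minimal sketch of that dual tagging, assuming a hypothetical registry and image name:
    GIT_SHA=$(git rev-parse --short HEAD)          # exact git version
    docker build -t registry.example.com/myapp:${GIT_SHA} .
    docker tag registry.example.com/myapp:${GIT_SHA} registry.example.com/myapp:beta
    docker push registry.example.com/myapp:${GIT_SHA}
    docker push registry.example.com/myapp:beta    # alpha/beta/prod stage tag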
Several containers (containing the different application components), and most apps use a datastore like Mongo. LB configuration and certificates are also important.
Centralized logging (we use Graylog), and the Cloud RTI dashboard which shows application-level health checks (provided by the applications themselves).
This only happens in very early deployment stages (initial deployment): e.g. testing if a volume gets mounted correctly, checking env. vars etc.
A frontend which is part of Amdatu Deploymentctl (an open source tool: https://bitbucket.org/amdatulabs/amdatu-kubernetes-deploymentctl-backend) or Kubectl
yes
Very happy with them actually, also from the perspective of the API
Not high priority. We do use scheduled scaling based on the API though (opensourced here: https://bitbucket.org/amdatulabs/amdatu-kubernetes-scalerd)
Both for environment isolation (e.g. test/prod), but also for multi tenancy. For the latter network isolation is still an issue, but we have high hopes for project Calico.
9
6/15/2016 10:45:12
Ease of starting/stopping new pods and getting data from what is happening with logs and describe commands.
Ingress mappings and load balancing external facing services
none
build/test locally (in a container)
git push -> jenkins build container
Manages yaml files for infrastructure and separate repos for applications
rc, ing
kubectl logs and then remove label and kubectl exec
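A minimal sketch of that remove-the-label trick (pod and label names are hypothetical): dropping the label takes the pod out of its Service's selector so it stops receiving traffic but keeps running for inspection.
    kubectl label pod myapp-1234 app-        # trailing "-" removes the "app" label
    kubectl exec -it myapp-1234 -- /bin/sh   # inspect the quarantined pod live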
journalctl -u docker and kubelet
more easily show errors on multi container pods. See when pods are flapping more easily. See access logs for secrets
secrets and env variables | for now | not right now
namespace per team.environment
10
6/15/2016 10:46:12 | none yet
getting k8s working inside our networking and security requirements
none yet | none yet
11
6/15/2016 11:00:00
Pod, ReplicationControllers, Namespaces, Deployments, Ingress, Services, the API, kubectl port-forward. All the abstractions are fantastic.
kubectl port-forward is flaky; I'd like to develop in the context (especially networking context) that the container will be running in. I'd like my dev machine to be able to temporarily join a cluster over VPN or something.

YAML and JSON are both kinda clunky in their own ways. (EDN?)

I often run out of resources on my qa/dev/sandbox cluster, though I think my pods are not using what they are claiming. I probably just need to research how to set them to claim things differently but the default causes difficulty in a qa / development setting.
None but I probably will use Helm, though that's not quite 3rd party any more is it?
develop microservice containers locally, would be nice to test locally in the context of the cluster (join via VPN?)
use CircleCI to build - test - push to gcr.io then trigger an automatic deployment daemon running on the cluster that generates RCs and Secrets for each build. I then tweak the result manually using kubectl edit a lot (love that feature too).
Still working through that, but we have an automatic deployment system that maintains a running RC for the HEAD of each branch.
Not really sure what this question is asking. Compute, Memory, Networking and Disk? Replication Controllers though we will probably move to Deployments, Secrets, Services ...
Yes "kubectl logs" though "kubectl exec pod /bin/ash" too.
I haven't really had to recently, but when I did, it was to run `docker images` or `docker pull`. This was before I started using gcr.io
"kubectl join-cluster --do-not-schedule-pods-on-me"
Creating kubernetes Secrets by hand.
Yes
expose more information about the relationships between replication controllers and the pods they replicate. This could be true of all objects that reference other objects. I'd like to walk the path from RC to Service through the pods, or visualize that somehow.
yes
I'll probably use them for environments like prod/stage, I don't use them yet.
12
6/15/2016 11:02:53 | Cluster, replication controllers and easy load balancer | I don't have | Jenkins
I like the workflow plugin to prepare the pipeline deploy.
Run local tests, open pull request and wait for the CI
With Jenkins container | Control of releases | Only Pod | Logentries and Sysdig
System logs with dmesg or strace or gdb.
Something like sysdig or strace
With kubernetes secrets | No | Awesome | Yes, please!
One for environment and one per application
13
6/15/2016 11:10:53 | scheduling / auto scaling
breaking k8s APIs as the project moves at rapid pace
helm, deis
ease of use, quick production bootstrapping
PR --> test passing CI --> staging --> manual tests --> prod
travis / jenkins | kubectl logs | logs | persistent storage | yes | yes | all of the above
14
6/15/2016 11:14:53
Daemon sets, Jobs and Deployments make things way easier
We have to write lots of automation for complex application deployments, as there's no logic to make complex build/deployment pipelines yet - e.g. run migrations first, then roll DB on top
none
we write custom shell/kubectl scripts for complex deployments
yes, both locally and on various environment
Jenkins
not a big role, versions are app specific. There's no 1:1 mapping
Jobs, DaemonSets, Deployments, Services, pretty much all of them
kubectl events + logs + influxdb metrics
events, docker ps -a
kubectl can get info from node problem API to display some stats about node
secrets are manually versioned
no, it's missing the whole subset of features for stateful applications
stateless services inside k8s are not the biggest problem in k8s
yes | namespace per tenant
15
6/15/2016 11:20:53 | Simple Pod yml definitions, automatic Failover/reschedule
Configuring security; using CNI with Weave breaks nodePort; Service for kubernetes apiserver points from pods to the IP of the wrong network interface
Puppet (Legacy), ansible, Docker (hyperkube), coreos, ubuntu (Legacy)
Setup kubernetes cluster on bare metal; the Linux kernel is handy in case one needs to access hardware
Local development, gitlab automatically deploys to kubernetes for testing env, when merged to master then deploy as live
Gitlab
Every commit to git creates a Docker Image with its Hash as version
Importer and REST-based API with actual business value
Kubectl Logs, elk Stack, ssh to node and Docker Logs when a Pod refuses to get deployed
Reason a Pod does not get scheduled
No idea
Kubernetes secrets or hardcoded in the yaml of the pod
Yes
Priority for stuff: deploy my internal Docker Registry before trying to deploy my App from there
Yes
One per customer and one for the tooling ci pipeline
16
6/15/2016 11:27:25 | jobs, kubectl
immature ecosystem (few tools), persistent storage is a huge pain, secrets are insecure (with no better alternative)
helm | It works, it's awesome.
I do local app testing on a local cluster, then deploy to GKE.
I wish there was a good way to do this. Right now, I don't.
"Version Control" as in VCS: Use git to manage manifests. A huge pain point is not being able to adequately version control docker images. Tracking a Dockerfile is not sufficient.
Database, cache, proxy, webserver -- which translates to pods, replica sets, and something for persistent storage.
SAVE ME!!!!
systemd logs, and I wade through megs upon megs of data
There are debugging tools for Kubernetes? All I know about are various ways to get some bits of the log data.
Manage secret data externally, and inject them into secrets using Helm and templates
Yes
Aside from horizontal autoscalers? Yes, it would be nice to mark things for autoscaling memory, CPU usage
Per team (e.g. website team) or per app.
17
6/15/2016 11:31:59 | Rolling updates and scaling out nodes
Debugging and reviewing logs from multiple nodes at once
None | N/a
Developing and testing locally is independent of the containerization process. Pushes to production are where containers and kubernetes are used
We are currently working on the automation to use the kubernetes capabilities
A large role since we are leveraging the rolling updates for production deployments
Storage, indexing, business logic all running in a 10 node cluster
Kubectl logs is the primary mechanism we use for debugging the app. This can prove difficult in a multi node architecture
Logging detail from the specific service for errors or application specific details
Being able to find a specific log or transaction throughout the entire cluster to more quickly find information when troubleshooting
Using the secret storage | Yes
This is already easy for our team to leverage
Yes | Per environment
18
6/15/2016 12:00:55 | The auto replication | Rolling updates that time out | My own
Deploy, update and check
I have a vm with the same pods locally and a k8s env
My own app | Logs and describe
Connect to my mysql or kill pods
Yes | Yes | God yes | No
19
6/15/2016 12:15:43
Deployment and rollout strategies, volume claims, secret management
Having storage as dynamic as K8s; authentication is not documented very well
Deis Workflows
Deis Workflow provides a great interface for ad-hoc setups
4-6 stages pipeline - local development, 2-3 integration stages, 1 load/performance test, preprod + production
Jenkins
Critical, all K8s YAML configuration is under version control along with the application
Service,Deployments,Pods,Volumes,Secrets
Lab environment + aggregated logs within Logstash/Kibana
Connectivity to the outside
kubectl logs from all pods within a deployment/filter; documentation on debugging known issues would be nice
With native K8s secrets
Yes if affinity leaves alpha
Deployments work well for us
Good to have but not mission critical
one per environment
20
6/15/2016 12:16:24 | Declarative application state; service discovery | Recurring single-task pods | gcloud CLI | I use GKE
Docker-compose locally and deploy to kubernetes for production
Don't use CI yet
Script to update the image tag in the deployment YAMLs and create a git commit and tag for each version. Also version control all of the YAML configuration files for resources.
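A minimal sketch of such a script, assuming GNU sed, a single deployment.yaml, and a hypothetical gcr.io image path:
    VERSION="$1"                                   # e.g. v1.2.3
    sed -i "s|image: gcr.io/my-project/myapp:.*|image: gcr.io/my-project/myapp:${VERSION}|" deployment.yaml
    git add deployment.yaml
    git commit -m "Deploy myapp ${VERSION}"
    git tag "${VERSION}"
    kubectl apply -f deployment.yaml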
Deployments, services, ingress, secrets
kubectl logs and describe
Manually create with kubectl create secret --from-file
Yes | Yes
21
6/15/2016 13:09:43
I like the deployment abstraction. It's easy to change a version and it automatically makes a rolling update
It would be nice to have different deployment strategies
We think about using helm
The idea is to provide complex, but packageable systems to provide systems like github.com/zalando/spilo as appliance to teams
We will use the API to deploy from Jenkins
We use git to save, share and version our deployment manifests
This varies a lot, sometimes one container with small sidecars or a couple of microservices with a database cluster (github.com/zalando/spilo)
As application developer kubectl attach, and as SRE I would possibly add a debugging container to the POD I have to look into.
It's only needed for hardware, disk, I/O or network problems, so tooling to analyze these parts would be great
Kubernetes secrets and a system that downloads and rotates keys from an object storage (S3, possibly GCS or something else ). It would be nice to have key rotation possible in kubernetes.
We don't have production load yet, so I cannot answer this. I guess there is a need for different use cases to get different schedule strategies.
We have deployment in kubernetes, why should I care about rc?
Auto-scaling is a really interesting feature for us. In the bare metal DCs as in cloud providers like AWS or GCP. For cloud providers a twofold auto-scaling should work (nodes and number of replicas should scale up and down ).
We use a GCP project per team and GKE as kubernetes cluster. For environments we would use a different cluster within the same project. Namespaces we use to separate system containers and application containers. It's up to our users to create different namespaces for them to structure their services.
Some time ago we thought about huge clusters and separating teams with namespaces and use calico to enforce network separation.

For questions related to my answers, feel free to message me on Twitter @sszuecs
22
6/15/2016 14:40:13
I like being able to use a single tool (kubectl) to manage all aspects of the cluster

I also like the edit-on-the-fly features for specs
There are documents focusing on cloud installs, but almost nothing for people wanting to run kubernetes themselves on physical or bare metal installs
Deis
We use Deis because it is developer friendly.
dev is all done locally with docker-compose
All CI/CD is done through git hooks using semaphore
Huge. Deis uses git sha's to tag releases
Storage/Compute/Routing
Kubectl for sure
We run CoreOS, so if I have to go to the node, I would be running journalctl to get the logs of the kube services
we use the kubernetes secrets
yes | YES. | Yes
Deis creates namespaces per application
23
6/15/2016 15:10:29
pod as atomic unit, replication controller that can detect dead pod and replace it with the new one.
try to determine what is dockerized and not dockerized
saltstack
salt is pretty useful to keep the configuration management going as well as an event bus for CI/CD.
yes with docker-compose. ideally hope there is kube-compose which does more native k8s.
Salt events
It is the truth and input for the pipeline
N/A
kubectl exec -it <pod_name> -c <container_name> -- bash
for logs inside the pod
probably logs gathered from different nodes.
pgp encrypted | yes | positive | yes | one namespace per app
24
6/15/2016 18:54:18 | kubectl | doc for bare metal installation | fleetctl | easy to use
in-house CI/CD and test locally
another dedicated team is doing CI/CD for k8s
everything is version controlled
external database | kubectl exec | systemctl status | network bw status
app dev handle it themselves
So far ok | force dev to accept rc | not yet
one namespace for all. k8s need better authN/authZ to make namespace useful in the future.
25
6/15/2016 19:56:40 | Deployments API
Variable features depending on environment/cloud
Deis Workflow | Simplicity for developers | Develop on a dev cluster | Jenkins
Only version the app, no config or manifests
Usually Service + Deployment
kubectl logs, kubectl exec
Don't much
Namespace-wide logging, etc
Envvars | For now, yes
Deployments API is enough for me
Yes! | Per app
26
6/15/2016 22:39:17 | rolling deployments, service discovery
provisioning and updating clusters, configuration & secrets management
Ignition, prometheus
missing functionality, UI is not mature, monitoring & alerting out-of-box needs lots of work
jenkins -> test -> staging/canary -> prod (all cloud)
jenkins & scripts
working on getting deployments and configMaps into git (tracker branch per env)
configuration, secrets, persistent volumes,
UI logs (no Deployments in UI!), kibana, grafana. installed vpn inside k8s for dev access to pods & services (e.g. to connect GUI clients to kafka/cassandra clusters)
usually don't need to, maybe just to verify configuration of node. mostly connect directly to containers using kubectl exec where needed.
make configuration easier to use (Helm?). most issues stem from initial setup & config of the resources. especially configuring 3rd party images to my needs
local storage, looking into vault. generating k8s secrets on cluster setup
like Deployments & kubectl apply to root folder. experience is not too good. takes lots of time to first configure resources.
YES
namespace per environment
27
6/15/2016 23:44:59
How easy it is to maintain large scale infrastructure without any dedicated DevOps
Using variables in templates. Now using sed to do that. Would be handy for updating just the image when redeploying
Google Container Engine
All of our infrastructure is at Google, and it's easy to use
We run just docker locally for tests
Yes, we deploy via Jenkins which calls the kubernetes api
Quite big. We have one c++ app that changes its input parameter set when a new version comes along. This is all automated to work with our other apps though
App, https redirect, mongo, redis
Google cloud logging | I never do!
None for me at the moment
Secrets are mounted in a volume that's in container engine
Yes
Only using deployments now
Yes! | Per service
28
6/16/2016 0:09:09 | Kubernetes API
External exposure of K8s services in different cloud environments
Bamboo
test/deploy locally + CI/CD pipeline
yes
configuration & source code management
web servers, datastores | graylog
logging, memory consumption, files
k8s secrets | yes
they are called replication sets nowadays? :-) we use the new k8s deployment style. We quite like it over the old replication controllers
yes. both on the application side as well as on the underlying hardware / vm's
one per environment
29
6/16/2016 0:37:44 | deployments | Kube-DNS resiliency to a node crash | sysdig | better monitoring
develop and test locally with docker compose => push => CI => auto-build => deploy
we use CI (gitlab-ci now) and quay for building images based on webhooks, we still deploy manually
everything goes via git and ci
DB
central logging (graylog in our case), kubectl logs, journalctl. debugging is difficult
dmesg, journalctl, /var/log, ps, pstree, top, df
when a pod is rescheduled from a node to another, history seems to be lost, in general being able to see history of pods and last 1000 lines of logs before each restart would be nice.
with kubernetes secrets
troubleshooting is not easy, docker can misbehave, dns can misbehave, that makes it hard to maintain things stable.
we use deployments and like them a lot
yes, ideally based on something else than heapster (it is not reliable for us)
we use it for environments
30
6/16/2016 1:37:45
Automatic re-deployment by simply updating the docker image in the Deployment definition
it's all through command line, the dashboard needs improvements. Also, Continuous Deployment is not very easy (we need to have somewhere the YAML template representing a deployment and do some string replacement and apply the YAML). There is no real solution for CI.
Also, the logging and monitoring side is not documented enough. It should be easier to setup some centralized logging, with more examples.
None, we deploy through kubectl
we develop and test locally, then our CI build automatically a docker image when a commit gets pushed to master.
We're still struggling to automatically update our deployment descriptions with new Docker image. it should be easier.
No real role on our side, the commit hash is put in the docker image name, that's the only relationship between our Git setup and Kubernetes.
ConfigMap for configuration, and some secrets.
Probably log in to the node, unfortunately
processes running, system logs
not sure yet
locally :( There is only one person managing the secrets, and they are on his computer.
Definitely
I think it's pretty good the way it is
Not at the moment
environment: staging, production, kube-system
31
6/16/2016 2:19:18 | Deployment, with custom wrapping
barely any CLI operations to deployments "as a whole", e.g. "tail the logs from all my instances". would love to "prepare" a deployment before going through with it (stash the definition in Kubernetes before going through with it) – we currently stash them as json in docker containers. separate lifecycle of deployment and services – basically impossible to roll out incompatible label set / label selector changes safely.
something homegrown to integrate with other build tooling we have
enforcing conventions and consistency; safety (kubectl delete ns --all is way too easy and fast)
we currently have 3 distinct setups for local (unit) testing, integration testing (docker-compose) and deployment (kubernetes). running a local k8s cluster for dev is very heavy-handed and at the same time resource constrained (apps regularly request more memory than available on a dev workstation). looking at namespaces for integration testing.
In our CD pipelines, we produce (in parallel) Docker images for the application, and a "deploy" (which under the hood is a docker image with kubectl, a few scripts, and json files for the Kubernetes API objects). Later, in the deploy stage, we execute the deploy script from the latter, which `kubectl apply`s the json files in the correct namespace. Afterwards, to make deploys "synchronous" and give feedback in the CD interface whether a deployment succeeded, it reads the number of replicas from the API and waits for that many ready pods with the new label set to appear.
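A rough sketch of that deploy stage reduced to its kubectl essentials; the namespace, directory, label, and replica count are hypothetical, and pod phase is used here as a crude stand-in for readiness:
    NAMESPACE="foo-staging"
    kubectl --namespace="${NAMESPACE}" apply -f deploy/objects/
    # Wait until the expected number of pods with the new label set are up.
    until [ "$(kubectl --namespace="${NAMESPACE}" get pods -l release=abc1234 \
        --no-headers | grep -c Running)" -ge 3 ]; do
      sleep 5
    done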
We commit the podspec, and parameterized invocations of our wrapper tools. The tools read and parse the podspec, and wrap them into a Deployment object. The commit hash and count as well as the pipeline run number are used to tie together build artifacts, deploy containers, and deployments.
Namespace
Service
Deployment
Ingress(es)
metrics + kubectl logs + kubectl exec
kubelet logs, kernel logs
`kubectl exec` in the process namespace of a running pod/container, but with a different (richer) image; ideally with the actual container filesystem mounted under some path.
`kubectl logs` but for all pods in a deployment / with a label selector.
We bake a GPG encrypted configuration file into the application image, which in production can be decrypted by calling out to an oracle service. Currently starting to integrate Vault.
So far yes – we only support 12factor-ish applications right now, so our requirements are relatively low.
We only use replication controllers / replica sets through deployments.
Yes, but low priority. Mostly to deal with services that can get unexpected load spikes due to very popular content.
One namespace per "system" (application) and "environment" – e.g. "foo"/"production" -> namespace "foo"; "foo"/"staging" -> namespace "foo-staging"; "frontend"/"production" -> "frontend"; "frontend"/"pr-1234" -> "frontend-pr-1234" (short-lived); "bar"/"green" -> "bar-green"; "bar"/"blue" -> "bar-blue".
Within each namespace there may be several "components", e.g. "api", "cache", "worker", in the future "db". Goal is to eventually support making a "full" clone of a system as a new environment arbitrarily. Missing for that currently is a way to "plug together" different environments' systems – e.g. "frontend"/"pr-1234" may want to use the api from "foo"/"staging" without explicitly different configuration; basically cluster-level dependency injection.
32
6/16/2016 2:34:13 | Deployment controllers | Kubectl authentication (key distribution) | Internally built tools
Keep labels consistent, limit damage from improper use of kubectl
Build,Test,Fix cycle (unit tests), Continuous Integration (want support for local clusters + deployments), Artifact publishing, Deploy to Canary, Validate Canary, Deploy to Production, Validate Production.
Namespace per pull request for one project.
Versions derive from CI artifact management and not source control. Everything gets labeled with the build version.
Service APIs, Admin APIs. Database, Cache, Batch jobs
kubectl get pods -l "match" | cut -f1 | xargs -L1 kubectl logs
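A variant of the one-liner above (label value hypothetical) that asks the API for bare resource names instead of cutting the padded table output:
    kubectl get pods -l app=myservice -o name | xargs -L1 kubectl logs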
dmesg, iostat, vmstat, mount
Vault and Internal Oracle
Host based volume affinity would help for host based volume reuse.
Deployments with GreenBlue strategy rather than RollingUpdate
Not for the datacenter
Namespace per application (Bounded Context)
33
6/16/2016 8:20:37 | kubectl apply
It should be easier to install a production cluster.
Jenkins and drone.io
because we need a way to deploy apps
once we commit, build docker, test docker, push docker image, update deployment yaml, apply deployment yaml, pray
we use jenkins + gradle plugin that creates namespace, manages yamls, rolling updates, etc.
it's a bit tricky. building an app and deploying an app seem to be 2 separate things. There's not an easy flow from pushing images to deploying those images.
replica sets, services, secrets. All goes into its namespace.
checking logs, events, praying, swearing, then at some point you go and put zipkin and life changes.
docker ps. Usually it is docker misbehaving. disk space... we should have a much better way to define node health checks.
Application monitoring and logging should be part of the core functionality. Not saying part of kubelet, addons are ok, but we do not have a standard solution.
Checking logs from dead containers is very important... we should have a much better/easier way to get info of why things fail. Oh dear CrashLoopBack :)
I think that a Pod should be able to aggregate all the logs of all the containers it runs, so we have 1 place to go and check.
we don't, we use Vault. store vault tokens as secrets.
No. We should be able to define what kind of scheduling we want. "Schedule this in this machines only" or "Place 1 pod per node that matches this condition"
see above, we should be able to tell the RC how we want to schedule the different replicas.
It is, but it should be smarter and let you know how the limits affect the autoscaling
1 ns per app and env.
34
6/16/2016 8:28:22 | Deployment svc service account rolling updates and more | Monitoring | None?
No test deploy on gcloud ci cluster and move it fwd to staging and prod
Jenkins every night creates a cluster from our latest, on which we run e2e tests; if they succeed we commit these images
We commit our docker image tags after every successful nightly cluster
Several proprietary dockers and svcs
Kubectl log attach describe get pods and elasticsearch
Almost never go to a node
Don't know | Don't know | No | Yes | 1 for env
35
6/16/2016 19:32:17 | Resources (Deployments, Services, Secrets, etc)
Templating, app definitions or easy CI/CD
Kontinuous and our own Dashboard UI. Planning to use Helm
More control and flexibility
Commit code to GH which triggers a pipeline in K8s with Kontinuous. The pipeline covers image build, testing, publishing to repo, deployed to staging namespace and then deployed to prod namespace after approval
We use Kontinuous for CI/CD
Critical. Git is core to everything. Images are then built from that and versioned with labels
DBs, Backends (Go, Ruby, etc), JS frontends. We also manage data storage services (independently to an App but running in K8s)
Resource events (kubectl describe) followed by container logs (kubectl logs) and metrics (heapster). Starting to consider weave scope
Try to avoid it unless the node has serious issues.
Better metrics integration in kubectl (kubectl stats?). Basic 'anomaly' detection - logs rate, flapping, etc (shown in kubectl describe for resources?)
Base secrets without values are stored in source control. Values are templated and updated before deploy.
Yes
We use mostly Deployments these days. One improvement we'd like is for canary deployments - to be able to roll out a percentage or number of updated resources
Yes
Typically single namespaces per app. We're starting to explore the idea of multiple environment namespaces per app for a deployment pipeline (eg. myapp-development, myapp-staging, etc)
36
6/16/2016 19:35:56 | kubectl run
building, editing, and managing config files is terrible
hand built stuff
editing and maintaining config files is really hard, it is getting better with configmaps
coreos-vagrant | jenkins
git checkin of configs before deploy
kubectl logs, external logging
networking is non-obvious, this doc is helpful: https://coreos.com/kubernetes/docs/latest/network-troubleshooting.html
checked into git encrypted with public keys
currently, yes | no | environments
37
6/16/2016 19:46:46
I like the kubernetes jobs: they run stuff and automatically shut it down after it's done. This is really helpful to perform random tests for the application.
making sure that the deployed application is running and thoroughly tested.
AcalephStorage/kontinuous is making things easy for me. you just have to define pipeline.yml (build, command (testing), publish, deploy) and it'll deploy your application in kubernetes if it's successful.
It's the CI-CD that I need. I don't have to do things manually in the cluster.
yes. Test and deploy it locally. Make sure it's not gonna break.
I'm using AcalephStorage/kontinuous
important. some versions may have the features that I need.
in a single app I need the deployment resource, service. Just to make it work
kubectl logs if resource is running. I usually do kubectl describe if resource status is pending.
look for running resources inside that node
via namespace. yes.
I would like to see the pod resources and services under the replication controller
yes
development, release, production
38
6/16/2016 19:58:36 | kubectl and the api
mostly when needing volumes and not using cloud providers.
kontinuous
we use it for our own local tests and deployments
basically a modified gitflow. we also need a build that can work locally on a dev's machine as well as on a CI/CD server. deployments can also work locally too. Makes it easier to move from different tools.
we're trying to build kontinuous for this.
it plays a role. using git flow, develop branches are deployed as "dev" applications on k8s, master deploys prod.
replication controller, service, secret
kubectl logs, kubectl exec, kubectl describe, and sometimes manually digging in to docker
how k8s is configured for that node. also docker. and the pods deployed there.
alerting. visualisation of the resources. looking at failed/terminated pod logs (had to use docker for this sometimes).
using the secrets resource
yes | would be good | yes
it depends really. right now we're using as one namespace per application. planning to try out the namespace per environment too.
39
6/17/2016 6:46:49 | daemonsets, replication controllers, autoscaling
At this point nothing major (early prototyping).
Ansible, OSGi container + implementations from compendium
Ansible to automate the docker+k8s setup. OSGi for composing the application (java) from modules and dOSGi functionality (aries-rsa)
At the moment only prototyping but local test/deploy will be required.
Not yet. Planned.
Dockerfile and kube-conf are stored on VC.
At the moment only apache karaf features.
kubectl logs
Until I discovered kubectl exec I used to do docker exec and docker inspect to look at log files.
Not yet. | At the moment yes.
It would be nice to be able to more easily customize the default behavior. For example lost node detection is 5 min which is kind of long for a demo. Finding the flag, finding which other flags need to be changed and doing the actual change took me a while.
Yes.per environment
40
6/17/2016 12:30:33 | Quick and easy scaling
The initial setup of a service is a bit cumbersome.
OpenShift, GitHub, and GitLab
OpenShift provides authentication and authorization for users, and GitHub and GitLab are convenient places to host our code.
I typically test the software locally and then manually trigger a build in OpenShift when the tests pass.
I have a Jenkins job to build the code automatically and a git repository that contains a Dockerfile and build scripts. OpenShift builds use the Dockerfile and build scripts to download the artifacts built by Jenkins and create an image from them.
Some of our apps use git tagging to decide what to deploy and where. (e.g. tag a commit as "prod" to deploy that commit to production)
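A minimal sketch of that tagging convention (remote name and commit are hypothetical); moving the tag re-targets what gets deployed:
    git tag -f prod 1a2b3c4      # point the "prod" tag at the commit to ship
    git push -f origin prod      # the deployment tooling tracks this tag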
Typically just the code
OpenShift has its own client, which lets me view logs and start a shell session inside a container. These two methods are my go-to methods for debugging failures.
I typically look at disk space; there's a known Docker bug that causes deleted files to be held open, consuming disk space needlessly.
I can't think of any off the top of my head.
OpenShift has a secrets mechanism that lets secret data be published to apps either in environment variables or as files.
Yes.
I sometimes see two instances of the same pod running on the same node, and I don't know why.
Yes, but it's not a very high priority.
We use namespaces for the environments for our main set of services. For random one-off services, we typically use one namespace per service.
41
6/17/2016 14:58:21 | Deployments, ConfigMaps | Templating, automatic updating
Test in VMs, push to staging, push to prod.
Jenkins orchestrates via the k8s API.
huge role
RC, ConfigMap, Secret, Deployment, Service
kubectl proxy, kubectl logs
check on docker
easier context and namespace switching
decrypt from private repo, format as k8s secrets, push to k8s API
No, more advanced scheduling options are needed.
Knowing what RC generated a pod would be nice. This used to be a feature but was removed.
yes
1 per engineering project/team & env. i.e. widgets-app-prod and widgets-app-staging
42
6/17/2016 15:06:50
I really like deployments, jobs, configMaps, and secrets. Everything else can be built upon these constructs.
Sometimes bootstrapping/initialization of applications is hard (first time deployment). Init containers and better pod life cycle hooks would really improve this.
Jenkins, and various scripting languages (mostly bash)
Flexibility. Jenkins lets me have a single place to deploy apps from, and lets me do extra customization prior to submitting resources. I plan on using helm when it gets more stable to help handle dynamically constructing k8s manifests, as this is also something that would be useful for us.
I 100% need to be able to test/deploy locally. Today I run coreos-kubernetes with a high resource single node cluster. I then use docker build against an exposed docker socket to rapidly iterate on container images. I plan on deploying a docker registry so I can use more nodes & push directly to that instead of my current flow.
As mentioned before, Jenkins is what we use to actually deploy, upgrade and configure our applications. It is the single source of truth, and allows us to have a consistent deployment experience.
Today we version our Kubernetes manifests in git, and some secrets (encrypted as well).
Deployments, RCs, RSs, secrets, configMaps, daemonsets, jobs. In the future: periodic jobs as well.
Aggregated cluster logging (kibana, fluentd, elasticsearch), prometheus + grafana for metrics and investigating resource contention/problems.

I almost never use kubectl logs in production.
I rarely ever do this, but I often look at kernel logs, number of open connections/sockets. I might look at resource utilization, but prometheus does a good job of giving me what I need, or exposing it.
tools which can tell you if Kubernetes itself is having problems. alerting integration. audit logging. having a better story for persisting events would be ideal. having more types of events, and richer events would also help a lot.

obtaining access to a node via kubectl would be great (kubectl ssh <node> or kubectl ssh <pod> would get you to a node a particular pod is on).
store them encrypted in git, decrypt them in CI and create/update existing secrets. however it would be preferred if there was a secret backend which managed all of this for us.
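A minimal sketch of that decrypt-and-apply step, assuming GPG-encrypted Secret manifests in the repo (file names hypothetical):
    gpg --decrypt secrets/myapp-secrets.yaml.gpg > /tmp/myapp-secrets.yaml
    kubectl apply -f /tmp/myapp-secrets.yaml   # creates the Secret or updates it in place
    rm /tmp/myapp-secrets.yaml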
for the most part. having the ability to control "spread" would be useful. however with federated clusters that may be enough.
I think deployments/replica sets have gotten me to a point where I'm happy using them. I don't use RCs anymore except for apps not migrated to the new resources.
Not really.
One namespace for each "environment" per app/site is common. Having a namespace per set of components ("monitoring" namespace, "internal" namespace, etc). Not much authZ in namespaces yet, but this will change with RBAC.
43
6/17/2016 16:52:15
The ability to schedule a container for use, being managed and monitored by k8s.
Deploying on CoreOS! hah. Also, and I know you guys work hard on this, but it's a difficult sell to colleagues; the learning curve is steep.
Travis, Google Container Engine, Google Container hosting thing, coreos, flannel, calico, rkt, gcloud.
All through the stack.
Yes. We built a single master/node cluster.
Pretty much everything is deployed via CD. It made that quite easy.
VCS kicks off the deployment. Master is deployed every commit.
DB, in memory cache, language interpreter, webserver. Use namespaces as "walls" between apps.
Kubectl logs, hah. Also, reading the kubelet logs on the nodes has been incredibly helpful.
Kubelet logs, sanity checking that manifest-run services are OK.
I can't remember finding it easy to find the "last dead pod of {x} type'
K8s secrets, stored in VCS encrypted and decrypted by the CD prior to dep.
Yup.
Deployments are a massive, massive help.
Yes. But ideally on custom metrics -- is the queue on PHP getting too big (for example)
Per application. I don't think there's network segmentation between environments, and developers are lazy with staging.
44
6/17/2016 16:55:54
Pretty much everything is done via deployments, with some hpa on top
Understanding how to get it working in the first place was tricky. Now the biggest limiter is stateful apps like Redis (I understand PetSet is coming soon; hopefully that handles it for me)
... docker? Nothing else really
Because rktnetes isn't a viable option yet.
I'd like to be able to, but we haven't gotten to that point yet.
Automated build testing, and then packaging up after green.
commit to SVN/git, test goes green, then deploy green versions as desired
Unsure what this is asking
I use kubectl describe a lot. I send logs to ES+Kibana, including k8s events.
If I'm on the node, it's usually to do tcpdump or other inspection of network state. Sometimes to inspect a running FS or strace a process.
I'd like a better log of what caused a restart in a pod. Maybe this is already in events and I'm just not seeing it.
Separate git repo holds json files for k8s secrets. I'd like to use Hashicorp Vault at some point.
So far
I don't use RCs. I'd use ReplicaSets, but I never don't want Deployments, so I use that instead.
Yup. I use HPAs for a few things as it is. It would be interesting to auto-power-on/off hardware, but we're on bare metal, which could make that trickier than, say, AWS or GCE.
Right now most things are in one namespace. I have plans to use one namespace per test instance once I start building out more test infrastructure.
45
6/17/2016 18:33:49 | Kubectl apply -f app.yaml | Yaml spec docs | Terraform for aws infrastructure
Kube-up didn't work in our environment.
Test using docker locally. Or using tags in cluster.
Jenkins deploys to cluster
Keep resource specs out of version control currently.
Ingress, service, deployment
Yes, kubectl logs. Also running fluentd and elasticsearch
Check network ping, disk | Not many | Use secrets | Yes
Better docs on the specs.
Yes
Only to separate system and user services
46
6/18/2016 0:24:11 | kubectl | rolling updates | helm | ease of deployment | Yes | gitlab-ci | A big role. | kubectl
weavescope
docker ps
iptables
top (cpu, memory)
Something like weavescope
yes | yes
Not using namespaces. Environments separated by Google Cloud "projects".
47
6/18/2016 0:54:28 | Deployment API, events, kubectl
Debugging can be tricky due to the number of components involved; the dashboard UI needs to ramp up (still no Deployment API support); statistical load balancing is inferior in the case of pod failures compared to stateful, intelligent routing
None: we have extended an existing deployment tool that was used internally already to make the transition smooth
Teams develop locally agnostic of Kubernetes most of the time (12factor FTW). Testing the wiring between different components happens on a development namespace in Kubernetes. Most of the complexity in the standard case is covered by our internal deployment tool helper abstracting from Kubernetes (and Marathon/Mesos, what we are also still using)
Every commit runs through Jenkins, executing unit and integration tests. At some point, automatic deployment into Kubernetes via our tooling is triggered, and teams may run additional tests in their team-specific namespace
We check in all specs like every good Kubernetes citizen should do. In fact, deployments to production are tightly coupled to our git, so no one can deploy without proper VCS in place
Right now only high level Deployments (and everything they create implicitly). We plan to hook up Jobs too and eventually run with PetSets as soon as they have matured enough
kubectl logs | events | describe; curl; stackdriver (we're on GKE); existing documentation; sshing into machines
Mostly Docker inspections
Abstractly speaking, any kind of tooling that helps me debug issues across multiple objects/resources. For instance, a Deployment not working can be broken at any point in the chain container/pod/replicaset/deployment
We plan to use the Secrets API but are still running something internal (less desirable)
So far yes but we haven't invested too much time in this regard
See my debugging worries before
Absolutely, especially involving nodes
We run namespaces per environment and per team
48
6/19/2016 19:44:42
Integrated logging, monitoring, and telemetry through the Dashboard and API is just darn fancy and nice.
My company is a healthcare startup that handle PHI—as such we're subject to HIPAA. I wish there was more tooling for environments with high security/compliance demands. On AWS's EC2 service for example, PHI services need to be on dedicated EC2 instances with encrypted volumes (amongst other requirements).
Deis Workflow
We evaluated Deis Workflow as a PaaS option with AWS. We stumbled into Kubernetes through Deis!
Currently we use docker-compose to run apps and tests via Docker. Eventually we'd like to explore using k8s in development, too.
CircleCI
Version and release management is huge for us as it's part of auditing and compliance demands for HIPAA/HITECH. At the moment we're using Deis Workflow to manage it.
PostgreSQL or MySQL DBs, Redis, nginx.
Yes—especially on AWS
In combination with Deis Workflow, one namespace for: 1) each application managed by Deis 2) deis namespace for deis management 3) kube-service namespace for kubernetes cluster management
49
6/20/2016 9:25:11 | Stateless web applications
There seems to be no established best practice to deploy a high availability postgresql database cluster (or similar software) and it is unclear if kubernetes is a good approach for such a database setup. Most examples I see are for single database instances which are great for development. For now I am sticking with running postgres on its own VM without kubernetes.
Started using google cloud (want to try coreos tectonic but with my choice of private repository)
Great way to get started and they had a free trial
yes, my apps always run locally but not using kubernetes locally yet (need time to work on the setup)
Not there yet | Not there yet
Stateless webapp (angularjs / playframework), REST APIs (play framework), elastic search and database (no idea how to run a postgresql cluster in kubernetes so running separately)
Just the logs, I am building my capabilities on this. In the past I instrumented my apps with newrelic and will look to set up something similar again.
Sorry, too new... but event log aggregation for tools such as newrelic or an open source alternative would be great.
50
6/20/2016 12:02:13 | Services, Replication Controllers, Jobs and DaemonSets
Kubernetes itself doesn't yet provide high-level enough primitives to easily deploy applications. Also, to build software that accesses the Kubernetes API requires pulling in a *massive* codebase just to access the Go client.
Deis and Google Container Engine (GKE)
RE Deis - I use it because I work on the project. RE GKE - I don't have to worry about any operational aspects (the master, etcd, etc...)
I'd like to, but pushing images to quay.io and using GKE is simpler over the long run
We use deis, so simply publishing to a docker registry and doing a `deis pull` works
all code is checked into VCS, built into a container, pushed to a container registry, and then those images are deployed to Kubernetes
a service, replication controller, DaemonSet and sometimes 1 or more jobs
'kubectl logs -f' and 'kubectl exec' for the most part
I don't SSH into nodes ever. If I run 'kubectl exec', I usually check that a server is running by 'curl'ing to localhost
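A minimal sketch of that check (pod name, port, and path are hypothetical):
    kubectl exec myapp-1234 -- curl -s http://localhost:8080/healthz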
log aggregation for all pods in an RC would be nice. also, streaming the status of pods that a service abstracts over would be nice
I use Deis configs for that right now
Yes
Probably by adding the aforementioned logging & debug features on RCs
Yes
One namespace per application at least. I usually use labels for prod/staging/test/etc...
51
6/23/2016 19:26:10 | declarative yaml configuration, containers
trying to shoehorn MPI & other traditional workloads into a kube worldview (not kubernetes' fault per se)
none
test locally on minikube, deploy on vsphere
haven't done it yet
keep all the yaml files in git, but create/deploy manually
compute, storage | kubectl logs | haven't done it much | unsure | secrets
yes, so far, although I'm hanging out for ubernetes
no suggestions | yes! | not really using them
52
6/24/2016 7:02:18
Auto scaling. Being able to quickly fire up more instances is nice.
Defining multiple port ranges for services. FTP services that involve passive connections (Which require a large port range) are almost impossible to build on Kubernetes.
makefiles
I use makefiles mostly to help automation with deploys.
Yes, testing locally is huge to lower time in between debugging sessions
Not yet, but plan to
GIT manages all yaml files
FTP Server, NFS Server, HTTPS front end, and Multiple HTTPS back endpoints
kubectl logs and kubectl exec -it pod bash (And do separate logging there)
If environment variables are set up correctly, debugging fine grained things (Like a separate SQL log for proftpd)
Still new to kubernetes so haven't run into any huge pitfalls with debugging yet.
Set it, and forget it. | Yes
N/A (I've only used deployments ATM)
Absolutely
One for environments like production/staging
53
6/24/2016 7:08:12 | replication controllers
Giving Pods routable IP addresses is important, and should be explained and taught early on.
none | kubectl logs
Depends. "docker ps", 'ip route show", "systemd status kube*", "iptables -t nat -L", ...
Some easy to use iptables visualization tooling tailored to k8s would be great.
This is hard, and always will be. At some point, you have to come to the turtle on the bottom and embed a real secret someplace where an app can use it.
Maybe. auto-anything is not a panacea for human judgement.
one namespace per app
54
6/24/2016 7:08:39 | Pods (as a group of related containers) | Sometimes too much yml code | Ansible, baah | Automate yml creation
Build a container and launch it on a testing pod
Bash scripts
All (except secrets) is version controlled
Kubectl logs, kubectl events
I don't go too much to it
Secrets volume | More or less
Merging them with pods ( they are too related)
Yes | Not using them
55
6/24/2016 8:01:15
deployment objects and configuration based kubectl apply commands
NotReady nodes and networking complexity
Build management tools, docker private repository
Need support for docker image creation, private repositories
No local test/deploys. Non-production k8s in cloud.
Build images using CircleCI, upload images to s3 backed private repo. Pull images from s3 repo.
Versions managed mostly with git commits hashes. Docker repo tagged with both git commit hash and staging/production repo
Just pods
local application logs. because we have several different application logs, putting them all in stdout for kubectl logs doesn't work well
local application logs
cp files into/out of container
distributed grep or run a command on all containers by label(I wrote a simple script)
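A minimal sketch of such a script (not the respondent's), taking a label selector and a command to run in every matching pod:
    #!/bin/bash
    # Usage: ./forall.sh app=myapp grep -i error /var/log/app.log
    SELECTOR="$1"; shift
    for POD in $(kubectl get pods -l "${SELECTOR}" -o name); do
      echo "=== ${POD} ==="
      kubectl exec "${POD##*/}" -- "$@"
    done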
Don't use secrets. Too complicated for not enough benefits
Yes currently. Documentation on Node Affinity or node spreading and rack-aware configuration could be improved.
Yes, but not a high priority. We manually configure the AWS ASG to scale as needed.
Currently one per environment. May move to one cluster per environment for network and production isolation though.
56
6/24/2016 9:46:39 | replication, kube-up | elastikube
Gets the security and usability to kubernetes
Build Microservices
Not yet reached the desired CI/CD level. Jenkins used along with Kubernetes
very important role including application versioning along with code
Network port expose.
kubelet logs are important along with container built logs
get pods
more intuitive launch failure reasons.
Elastikube | No.
absolutely. at the same time it should be left to the dev/op to decide on criteria
one for tag
57
6/24/2016 10:17:16
The replication feature is awesome. I love the mechanism where you can roll in a new deployment of code and the spinning up of a new, replacement pod when one dies.
My biggest issue is tools for troubleshooting when something goes wrong. Particularly since my group provides a Kubernetes-based platform to customers for application deployment. The customers aren't going to be Kubernetes experts so continued effort toward easier, better ways of getting log and debugging information would be helpful.
Red Hat OpenShift
So it is easier for our customers to deploy their applications and self manage their pods and containers. Web UI and added levels of access control make it easier to provide a multi-tenant solution.
We have customers who write code for their projects but aren't really developers so source control isn't part of their workflow. Being able to deploy code directly from their source code directory would make things easier for them despite the lack of version control.
No. We might be doing more with CI (specifically Jenkins) in the future.
We use a locally hosted Git repo service as the source for deployments. Our customers are a mix of true developers and others that work with code. Some are already used to using a source control system and others are not.
A container with the programming language and web server, persistent storage, databases, connectivity to other containers where resources might be shared between applications.
Logs is the biggest one. I'm not familiar enough with other debugging techniques and am interested in others (or easier access to documentation for others).
Checking connectivity, looking at and temporarily changing code in the container (if possible) to test differences from what the container "thinks" is supposed to happen vs the local application.
Again, I'm less familiar with the available tools other than 'logs' so any additional tools or documentation on such tools would be great. Since we use OpenShift as our front-end for Kubernetes, we have issues troubleshooting customer pod failures during application builds or deployments because 'logs' doesn't provide enough debugging information and the pod is deleted after the failure.
We use OpenShift which extends and exposes the Kubernetes built-in secrets command.
So far it appears to be sufficient.
It is fine for what we use it for today.
Very much so. We are using the auto-scaling feature that OpenShift provides/extends but it is slow at gathering metrics to determine if it needs to scale up. A pod could be overloaded before it determines it needs to be scaled up.
We encourage our customers to use one namespace per application/service. The way we provision namespaces through OpenShift for individuals, they may reuse a namespace for multiple applications (particularly for the ease of sharing services between related applications).