Conveniently Install the Latest Linux Kernel in Ubuntu with a Script


Source: http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ (author: Ji m)
Translation: LCTT https://linux.cn/article-6219-1.html (translator: mr-ping)

--------

Want to install the latest Linux kernel? A simple script can do the job conveniently on an Ubuntu system.

Michael Murphy has written a script that installs the latest release candidate, stable, or lowlatency kernel in Ubuntu. After asking a few questions, the script downloads and installs the latest kernel packages from the Ubuntu kernel mainline page.

To install or upgrade the Linux kernel using the script:

1. Download the script by clicking the “Download Zip” button in the top-right corner of its GitHub page. (Translator's note: the script is mirrored below for convenience.)

2. Right-click the Zip file in your Downloads folder and select “Extract Here” to decompress it.

3. Right-click the extracted folder and select “Open in Terminal”.

A terminal will open at that folder. If you don't see the “Open in Terminal” option, search for and install nautilus-open-terminal in the Ubuntu Software Center, then log out and back in (or run nautilus -q in a terminal instead of re-logging in).

Note: here is the script itself; you can save it as an executable shell script:

    #!/bin/bash
    cd /tmp
    if ! which lynx > /dev/null; then sudo apt-get install lynx -y; fi
    if [ "$(getconf LONG_BIT)" == "64" ]; then arch=amd64; else arch=i386; fi

    function download() {
        wget $(lynx -dump -listonly -dont-wrap-pre $kernelURL | grep "$1" | grep "$2" | grep "$arch" | cut -d ' ' -f 4)
    }

    # Kernel URL
    read -p "Do you want the latest RC? " rc
    case "$rc" in
        y* | Y*) kernelURL=$(lynx -dump -nonumbers http://kernel.ubuntu.com/~kernel-ppa/mainline/ | tail -1) ;;
        n* | N*) kernelURL=$(lynx -dump -nonumbers http://kernel.ubuntu.com/~kernel-ppa/mainline/ | grep -v rc | tail -1) ;;
        *) exit ;;
    esac

    read -p "Do you want the lowlatency kernel? " lowlatency
    case "$lowlatency" in
        y* | Y*) lowlatency=1 ;;
        n* | N*) lowlatency=0 ;;
        *) exit ;;
    esac

    # Download Kernel
    if [ "$lowlatency" == "0" ]; then
        echo "Downloading the latest generic kernel."
        download generic header
        download generic image
    elif [ "$lowlatency" == "1" ]; then
        echo "Downloading the latest lowlatency kernel."
        download lowlatency header
        download lowlatency image
    fi

    # Shared Kernel Header
    wget $(lynx -dump -listonly -dont-wrap-pre $kernelURL | grep all | cut -d ' ' -f 4)

    # Install Kernel
    echo "Installing Linux Kernel"
    sudo dpkg -i linux*.deb
    echo "Done. You may now reboot."

4. Once in the terminal, run the following command to make the script executable:

    chmod +x *

Finally, run the script whenever you want to install or upgrade the Linux kernel on Ubuntu:

    ./*

The * is used in place of the script name here because it is the only file in the folder.

If the script runs successfully, reboot your computer when it finishes.

Reverting to the old kernel and removing the new one

If for some reason you need to revert and remove the new kernel, reboot your computer and, from the Advanced Options menu of the Grub boot loader, select the old kernel to boot.

Once the system has started, follow the steps in the next section.

How to remove the old (or new) kernel:

  1. Install Synaptic Package Manager from the Ubuntu Software Center.
  2. Open Synaptic Package Manager and do the following:
  • Click the Reload button so the newly installed kernel shows up in the list.
  • Select Status -> Installed in the left pane to make the list clearer.
  • Type linux-image- in the Quick filter box to search.
  • Select a kernel image “linux-image-x.xx.xx-generic” and mark it for removal (or Complete Removal).
  • Finally, apply the changes.

Repeat these steps until you have removed every kernel you don't need. Be careful not to remove the kernel you are currently running; you can check which one that is with the uname -r command.

On an Ubuntu server, you can run the following commands one by one:

    uname -r
    dpkg -l | grep linux-image-
    sudo apt-get autoremove KERNEL_IMAGE_NAME
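As a hedged illustration of the last step (the version string below is a made-up example), the package name passed to apt-get is simply linux-image- followed by a release string of the kind that uname -r prints:

```shell
# Build the package name for a kernel image from its release string.
# On a real system you would use: release=$(uname -r) to find the
# *running* kernel, then pick a different, unused version to remove.
release="4.2.0-040200rc7-generic"   # hypothetical example value
pkg="linux-image-${release}"
echo "$pkg"                         # -> linux-image-4.2.0-040200rc7-generic
# An unused kernel could then be removed with:
# sudo apt-get autoremove "$pkg"
```

Never pass the release that uname -r itself reports to the removal command, since that is the kernel you are currently running.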


via: http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/

11 Python Libraries You Might Not Know

Source: http://blog.yhathq.com/posts/11-python-libraries-you-might-not-know.html

There are tons of Python packages out there. So many that no one man or woman could possibly catch them all. PyPI alone has over 47,000 packages listed!

Recently, with so many data scientists making the switch to Python, I couldn’t help but think that while they’re getting some of the great benefits of pandas, scikit-learn, and numpy, they’re missing out on some older yet equally helpful Python libraries.

In this post, I’m going to highlight some lesser-known libraries. Even you experienced Pythonistas should take a look; there might be one or two in there you’ve never seen!

1) delorean

Delorean is a really cool date/time library. Apart from having a sweet name, it’s one of the more natural-feeling date/time munging libraries I’ve used in Python. It’s sort of like moment in javascript, except I laugh every time I import it. The docs are also good: in addition to being technically helpful, they make countless Back to the Future references.

from delorean import Delorean
EST = "US/Eastern"
d = Delorean(timezone=EST)

2) prettytable

There’s a chance you haven’t heard of prettytable because it’s listed on GoogleCode, which is basically the coding equivalent of Siberia.

Despite being exiled to a cold, snowy and desolate place, prettytable is great for constructing output that looks good in the terminal or in the browser. So if you’re working on a new plug-in for the IPython Notebook, check out prettytable for your HTML __repr__.

from prettytable import PrettyTable
table = PrettyTable(["animal", "ferocity"])
table.add_row(["wolverine", 100])
table.add_row(["grizzly", 87])
table.add_row(["Rabbit of Caerbannog", 110])
table.add_row(["cat", -1])
table.add_row(["platypus", 23])
table.add_row(["dolphin", 63])
table.add_row(["albatross", 44])
table.sortby = "ferocity"
table.reversesort = True
print(table)
+----------------------+----------+
|        animal        | ferocity |
+----------------------+----------+
| Rabbit of Caerbannog |   110    |
|      wolverine       |   100    |
|       grizzly        |    87    |
|       dolphin        |    63    |
|      albatross       |    44    |
|       platypus       |    23    |
|         cat          |    -1    |
+----------------------+----------+

3) snowballstemmer

Ok so the first time I installed snowballstemmer, it was because I thought the name was cool. But it’s actually a pretty slick little library. snowballstemmer will stem words in 15 different languages and also comes with a porter stemmer to boot.

from snowballstemmer import EnglishStemmer, SpanishStemmer
EnglishStemmer().stemWord("Gregory")
# Gregori
SpanishStemmer().stemWord("amarillo")
# amarill

4) wget

Remember every time you wrote that web crawler for some specific purpose? Turns out somebody built it…and it’s called wget. Recursively download a website? Grab every image from a page? Sidestep cookie traces? Done, done, and done.

Even movie Mark Zuckerberg says so himself:

First up is Kirkland, they keep everything open and allow indexes on their apache configuration, so a little wget magic is enough to download the entire Kirkland facebook. Kid stuff!

The Python version comes with just about every feature you could ask for and is easy to use.

import wget
wget.download("http://www.cnn.com/")
# 100% [............................................................................] 280385 / 280385

Note that another option for Linux and OS X users would be to use: from sh import wget. However, the Python wget module does have better argument handling.

5) PyMC

I’m not sure how PyMC gets left out of the mix so often. scikit-learn seems to be everyone’s darling (as it should, it’s fantastic), but in my opinion, not enough love is given to PyMC.

from pymc.examples import disaster_model
from pymc import MCMC
M = MCMC(disaster_model)
M.sample(iter=10000, burn=1000, thin=10)
[-----------------100%-----------------] 10000 of 10000 complete in 1.4 sec

If you don’t already know it, PyMC is a library for doing Bayesian analysis. It’s featured heavily in Cam Davidson-Pilon’s Bayesian Methods for Hackers and has made cameos on a lot of popular data science/python blogs, but has never received the cult following akin to scikit-learn.

6) sh

I can’t risk you leaving this page without knowing about sh. sh lets you import shell commands into Python as functions. It’s super useful for doing things that are easy in bash but that you can’t remember how to do in Python (e.g. recursively searching for files).

from sh import find
find("/tmp")
/tmp/foo
/tmp/foo/file1.json
/tmp/foo/file2.json
/tmp/foo/file3.json
/tmp/foo/bar/file3.json

7) fuzzywuzzy

Ranking in the top 10 of simplest libraries I’ve ever used (if you have 2-3 minutes, you can read through the source), fuzzywuzzy is a fuzzy string matching library built by the fine people at SeatGeek.

fuzzywuzzy implements things like string comparison ratios, token ratios, and plenty of other matching metrics. It’s great for creating feature vectors or matching up records in different databases.

from fuzzywuzzy import fuzz
fuzz.ratio("Hit me with your best shot", "Hit me with your pet shark")
# 85

8) progressbar

You know those scripts you have where you do a print "still going..." in that giant mess of a for loop you call your __main__? Yeah well instead of doing that, why don’t you step up your game and start using progressbar?

progressbar does pretty much exactly what you think it does…makes progress bars. And while this isn’t exactly a data science specific activity, it does put a nice touch on those extra long running scripts.

Alas, as another GoogleCode outcast, it’s not getting much love (the docs have 2 spaces for indents…2!!!). Do what’s right and give it a good ole pip install.

from progressbar import ProgressBar
import time
pbar = ProgressBar(maxval=10)
for i in range(1, 11):
    pbar.update(i)
    time.sleep(1)
pbar.finish()
# 60% |########################################################                                      |

9) colorama

So while you’re making your logs have nice progress bars, why not also make them colorful! It can actually be helpful for reminding yourself when things are going horribly wrong.

colorama is super easy to use. Just pop it into your scripts and add any text you want to print to a color:
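A minimal sketch of that (Fore, Style, and init are colorama's documented names; the messages printed are arbitrary examples):

```python
from colorama import Fore, Style, init

init()  # on Windows, converts the ANSI escape codes into Win32 color calls

# Prefix any text with a Fore color; reset when you want normal output back
print(Fore.RED + "something went horribly wrong")
print(Fore.GREEN + "all good here" + Style.RESET_ALL)
print("back to the normal terminal color")
```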

10) uuid

I’m of the mind that there are really only a few tools one needs in programming: hashing, key/value stores, and universally unique ids. uuid is the built in Python UUID library. It implements versions 1, 3, 4, and 5 of the UUID standards and is really handy for doing things like…err…ensuring uniqueness.

That might sound silly, but how many times have you had records for a marketing campaign, or an e-mail drop and you want to make sure everyone gets their own promo code or id number?

And if you’re worried about running out of ids, then fear not! The number of UUIDs you can generate is comparable to the number of atoms in the universe.

import uuid
print uuid.uuid4()
# e7bafa3d-274e-4b0a-b9cc-d898957b4b61


11) bashplotlib

Shameless self-promotion here, bashplotlib is one of my creations. It lets you plot histograms and scatterplots using stdin. So while you might not find it replacing ggplot or matplotlib as your everyday plotting library, the novelty value is quite high. At the very least, use it as a way to spruce up your logs a bit.

$ pip install bashplotlib
$ scatter --file data/texas.txt --pch x

Sending Email Using Stored Procedures in SQL Server


Introduction

This is an interesting topic to discuss. These days we are used to integrating email into every application: in .NET we wire it up with the SMTP settings in Web.Config and send it with the Send method. Recently I came across an interesting challenge: sending email from SQL Server itself. Suppose we have to track whether a scheduled SQL query executed successfully. We can't go and inspect the table every time just to check. It would be nice to get some form of notification telling us the status of the execution. And yes, it is possible: SQL Server can send mail using a few predefined stored procedures.

Let's learn how.

Getting Started

Our goal is to send mail using the predefined stored procedures. First we need to set up an account: the credentials the server needs in order to send mail. Mail is generally sent via SMTP (Simple Mail Transfer Protocol). The settings depend on what the server application requires; remember that the configuration must be valid.

Create a Database Mail account:

EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = 'SendEmailSqlDemoAccount'
  , @description = 'Sending SMTP mails to users'
  , @email_address = 'suraj.0241@gmail.com'
  , @display_name = 'Suraj Sahoo'
  , @replyto_address = 'suraj.0241@gmail.com'
  , @mailserver_name = 'smtp.gmail.com'
  , @port = 587
  , @username = 'XXXXXX'
  , @password = 'XXXXXX'
Go

Use valid credentials and server settings here; otherwise the mail will fail to send and will sit blocked in the send queue.

The next step is to create the profile that will be used to configure Database Mail. Here it is:

EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'SendEmailSqlDemoProfile'
  , @description = 'Mail Profile description'
Go

The profile holds the mail configuration and is what we send mail through.

The next step is to map the account to the profile. This tells the profile which account's credentials to use so that sending succeeds.

-- Add the account to the profile
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'SendEmailSqlDemoProfile'
  , @account_name = 'SendEmailSqlDemoAccount'
  , @sequence_number = 1
GO

With that in place, we can send email successfully. The send call looks like this:

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'SendEmailSqlDemoProfile'
  , @recipients = 'suraj.0241@gmail.com'
  , @subject = 'Automated Test Results (Successful)'
  , @body = 'The stored procedure finished successfully.'
  , @importance ='HIGH' 
GO

Sometimes the work inside a stored procedure does not go through. That is why TRY...CATCH blocks, with proper BEGIN and END handling, are effectively mandatory in stored procedures.

For example, suppose we have a SELECT-INSERT query inside a stored procedure that needs to select from and insert into four tables: Users | UserLogin | UserEmployment | Departments.

For each new screen that is created, we select the user rows and, based on the foreign key, insert them again into the same tables with a different foreign key that represents that particular screen. The query looks like this:

BEGIN TRY
  BEGIN TRAN
    INSERT INTO
      dbo.[User]
    SELECT
      us.UserName,
      us.UserAddress,
      us.UserPhone,
      @fkScreenID
    FROM
      dbo.[User] as us
    WHERE
      UserID = @userID
  COMMIT TRAN
END TRY
BEGIN CATCH
  ROLLBACK TRAN
END CATCH
-- The code for the other tables is similar. Better still, wrap the whole
-- executing block of the stored procedure in a single TRY...CATCH.

If anything fails here, control moves to the CATCH block, where we can call the email-sending procedure to get a notification of success or failure, the reason, and where it failed. That is very helpful for developers.
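A hedged sketch of that idea, reusing the profile and recipient values from the earlier examples (ERROR_MESSAGE() is the standard T-SQL function for the failure reason; the subject and body text are illustrative):

```sql
BEGIN TRY
    BEGIN TRAN
        -- ... the INSERT ... SELECT work shown above ...
    COMMIT TRAN
END TRY
BEGIN CATCH
    ROLLBACK TRAN
    -- Mail the failure reason to the developer
    DECLARE @err NVARCHAR(2048) = ERROR_MESSAGE();
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'SendEmailSqlDemoProfile'
      , @recipients = 'suraj.0241@gmail.com'
      , @subject = 'Automated Test Results (Failed)'
      , @body = @err
      , @importance = 'HIGH';
END CATCH
```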

Troubleshooting Mail

There are also stored procedures that tell us whether a mail succeeded, failed, or is still queued, which is a great feature.

To check the mail that has been sent and posted successfully, we can run the following query:

select * from msdb.dbo.sysmail_sentitems

Among the columns it returns is sent_status; a value of sent indicates the mail was sent successfully.

To check unsent mail that may have failed to go out, we run:

select * from msdb.dbo.sysmail_unsentitems

To check failed mail that cannot even be re-sent from the queue, we run:

select * from msdb.dbo.sysmail_faileditems

For details on a failure and its reason, a troubleshooting query would look like this:

SELECT items.subject,
    items.last_mod_date,
    l.description
FROM msdb.dbo.sysmail_faileditems AS items
INNER JOIN msdb.dbo.sysmail_event_log AS l
    ON items.mailitem_id = l.mailitem_id
GO

The result pairs each failed item with a description of the error.

An error description such as “No Such Host” usually appears when some of the SMTP server connection settings are wrong. We have to troubleshoot it ourselves: re-check the settings and credentials and try again. If it still does not work, check the DNS server settings and retry the configuration.

Conclusion

This time we walked through sending mail from SQL Server itself using stored procedures, and showed that it works. Troubleshooting the errors and settings turns out to be simple as well.

Exceptions and errors are an unavoidable part of development, but handling them is the developer's challenge.

Translation: http://www.codeceo.com/article/sql-server-send-mail.html
English original: Sending Email Using Stored Procedures in Sql Server
Translator: 码农网 – 小峰