42 Commits

SHA1 Message Date
55a74f7d98 Merge branch 'use-datetime' into merge-datetime-into-disqus
* use-datetime:
  extract date and time from email threads pages
  add groups and tags to schema; update how we determine timestamps; handle exceptions when we add the document to the writer, rather than elsewhere
  move where exception is caught (exception was also incorrect.)
  switched created_time, modified_time, indexed_time over to DATETIME. added DateParserPlugin to query QueryParser. added time fields to those being searched by default. tests do not seem to be working.
2018-08-24 01:13:42 -07:00
ab76226b0c Merge pull request #90 from dcppc/add-dates-and-subgroups-to-emails
Add dates and subgroups to emails
2018-08-24 00:07:40 -07:00
a4ebef6e6f extract date and time from email threads pages 2018-08-24 00:04:35 -07:00
bad50efa9b add groups and tags to schema; update how we determine timestamps; handle exceptions when we add the document to the writer, rather than elsewhere 2018-08-24 00:03:23 -07:00
629fc063db move where exception is caught (exception was also incorrect.) 2018-08-24 00:01:26 -07:00
3b0baa21de switched created_time, modified_time, indexed_time over to DATETIME. added DateParserPlugin to query QueryParser. added time fields to those being searched by default. tests do not seem to be working. 2018-08-23 19:01:40 -07:00
33b8857bd0 implement stop filter; implement query variations in main query parser 2018-08-23 17:15:48 -07:00
7c50fc9ff1 swap out data-commons with nihdatacommons in disqus urls 2018-08-23 17:15:10 -07:00
eb2cdf1437 fix a bug 2018-08-23 15:57:25 -07:00
c67e864581 add disqus threads to things being indexed by centillion 2018-08-23 15:55:59 -07:00
25cc12cf21 turn disqus_util into a crawler object 2018-08-23 15:55:21 -07:00
11c1185e62 clarify api call in disqus.md 2018-08-23 15:54:30 -07:00
74cfaf8275 add more notes on hypothesis API output 2018-08-22 15:23:28 -07:00
552caad135 add utilities to call disqus and hypothesis APIs
- both of these files are functions and are not integrated into centillion
2018-08-22 15:22:25 -07:00
19c42df978 update hypothesis/disqus notes 2018-08-22 12:45:51 -07:00
6f30e3f120 add api output from listThreads endpoint 2018-08-21 19:36:36 -07:00
ad6b653e27 add all the threads 2018-08-21 15:10:40 -07:00
6bfadef829 Merge pull request #73 from dcppc/feedback-floater
Add a feedback mechanism
2018-08-21 11:33:34 -07:00
c38683ae9f (resolve conflict) Merge branch 'dcppc' into feedback-floater
* dcppc:
  add centillion config back. no sensitive info.
  add option to set port at runtime with CENTILLION_PORT environment variable
  add a bit o whitespace
2018-08-21 11:32:59 -07:00
3f5349a5a6 Merge pull request #80 from dcppc/add-centillion-config-back
add centillion config back. no sensitive info.
2018-08-21 11:16:21 -07:00
f88cf6ecad add centillion config back. no sensitive info. 2018-08-21 11:15:29 -07:00
ec54292a4b Merge pull request #79 from dcppc/add-port-env-var
add option to set port at runtime
2018-08-21 11:12:17 -07:00
296132d356 add option to set port at runtime with CENTILLION_PORT environment variable 2018-08-21 11:09:46 -07:00
0bc40ba323 Merge pull request #76 from dcppc/add-whitespace
add a bit o whitespace
2018-08-21 10:33:20 -07:00
8143e214c2 add a bit o whitespace 2018-08-21 10:06:16 -07:00
b015da2e9b add dismissable "thanks for your feedback" message to top 2018-08-20 20:42:58 -07:00
9c6b57ba85 improve message formatting 2018-08-20 15:04:21 -07:00
a080eebc29 add dumy function as placeholder for where we add info messages 2018-08-20 15:04:03 -07:00
323d7ce8ca return better messages 2018-08-20 15:03:21 -07:00
da62a5c887 add successful post call and export to JSON db 2018-08-20 14:10:20 -07:00
2714ad3e0c update todo 2018-08-20 14:09:58 -07:00
5e1388e8a8 move modal into its own .html file 2018-08-20 10:34:20 -07:00
f40cccac99 update todo with tasks 2018-08-20 10:33:51 -07:00
c72fc44ea7 fix button and smiley styles 2018-08-20 10:33:36 -07:00
cf417917c9 add /feedback post route 2018-08-20 10:29:40 -07:00
8aaad93e68 Merge remote-tracking branch 'origin/dcppc' into feedback-floater
* origin/dcppc:
  remove copper requirement for /list endpoint
2018-08-19 21:05:22 -07:00
bf8d99c732 Merge pull request #71 from dcppc/hotfix/list-endpoint
remove copper requirement for /list endpoint
2018-08-19 20:49:12 -07:00
20c55891f3 remove copper requirement for /list endpoint 2018-08-19 20:44:06 -07:00
685058545a feedback button successfully triggers a modal 2018-08-19 01:37:55 -07:00
23fd17132e add page self-identifiers. add "send feedback" button. fix layouts. 2018-08-19 01:16:13 -07:00
d3ba1f11a7 Merge pull request #63 from dcppc/hotfix/search-box
fix "Search Metadata" label so search does not break
2018-08-17 08:35:16 -07:00
36e89fbc65 fix "Search Metadata" label so search does not break 2018-08-17 08:31:15 -07:00
26 changed files with 5773 additions and 418 deletions

.gitignore (vendored): 2 changed lines

@@ -1,4 +1,4 @@
config_centillion.py
feedback_database.json
config_flask.py
vp
credentials.json

Disqus.md (new file): 4268 changed lines

File diff suppressed because it is too large.

Hypothesis.md (new file): 249 changed lines

@@ -0,0 +1,249 @@
# Hypothesis API
## Authenticating
Example output from an authenticated call to the API root:
```
{
"links": {
"profile": {
"read": {
"url": "https://hypothes.is/api/profile",
"method": "GET",
"desc": "Fetch the user's profile"
},
"update": {
"url": "https://hypothes.is/api/profile",
"method": "PATCH",
"desc": "Update a user's preferences"
}
},
"search": {
"url": "https://hypothes.is/api/search",
"method": "GET",
"desc": "Search for annotations"
},
"group": {
"member": {
"add": {
"url": "https://hypothes.is/api/groups/:pubid/members/:userid",
"method": "POST",
"desc": "Add the user in the request params to a group."
},
"delete": {
"url": "https://hypothes.is/api/groups/:pubid/members/:userid",
"method": "DELETE",
"desc": "Remove the current user from a group."
}
}
},
"links": {
"url": "https://hypothes.is/api/links",
"method": "GET",
"desc": "URL templates for generating URLs for HTML pages"
},
"groups": {
"read": {
"url": "https://hypothes.is/api/groups",
"method": "GET",
"desc": "Fetch the user's groups"
}
},
"annotation": {
"hide": {
"url": "https://hypothes.is/api/annotations/:id/hide",
"method": "PUT",
"desc": "Hide an annotation as a group moderator."
},
"unhide": {
"url": "https://hypothes.is/api/annotations/:id/hide",
"method": "DELETE",
"desc": "Unhide an annotation as a group moderator."
},
"read": {
"url": "https://hypothes.is/api/annotations/:id",
"method": "GET",
"desc": "Fetch an annotation"
},
"create": {
"url": "https://hypothes.is/api/annotations",
"method": "POST",
"desc": "Create an annotation"
},
"update": {
"url": "https://hypothes.is/api/annotations/:id",
"method": "PATCH",
"desc": "Update an annotation"
},
"flag": {
"url": "https://hypothes.is/api/annotations/:id/flag",
"method": "PUT",
"desc": "Flag an annotation for review."
},
"delete": {
"url": "https://hypothes.is/api/annotations/:id",
"method": "DELETE",
"desc": "Delete an annotation"
}
}
}
}
```
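For reference, here is a minimal sketch of the request that produces this output, assuming a personal API token exported in the HYPOTHESIS_TOKEN environment variable (the hypothesis_util.py added later in this changeset follows the same pattern):
```
import os
import requests

# Assumption: a personal API token is exported as HYPOTHESIS_TOKEN.
token = os.environ['HYPOTHESIS_TOKEN']
headers = {'Authorization': 'Bearer %s' % token}

# GET the API root; the JSON above is this response body.
response = requests.get('https://hypothes.is/api', headers=headers)
print(response.json())
```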
## Listing
Here is the result of an API call to fetch a single annotation
given its annotation ID:
```
{
"updated": "2018-07-26T10:20:47.803636+00:00",
"group": "__world__",
"target": [
{
"source": "https://h.readthedocs.io/en/latest/api/authorization/",
"selector": [
{
"conformsTo": "https://tools.ietf.org/html/rfc3236",
"type": "FragmentSelector",
"value": "access-tokens"
},
{
"endContainer": "/div[1]/section[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[2]/p[2]",
"startContainer": "/div[1]/section[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[2]/p[1]",
"type": "RangeSelector",
"startOffset": 14,
"endOffset": 116
},
{
"type": "TextPositionSelector",
"end": 2234,
"start": 1374
},
{
"exact": "hich read or write data as a specific user need to be authorized\nwith an access token. Access tokens can be obtained in two ways:\n\nBy generating a personal API token on the Hypothesis developer\npage (you must be logged in to\nHypothesis to get to this page). This is the simplest method, however\nthese tokens are only suitable for enabling your application to make\nrequests as a single specific user.\n\nBy registering an \u201cOAuth client\u201d and\nimplementing the OAuth authentication flow\nin your application. This method allows any user to authorize your\napplication to read and write data via the API as that user. The Hypothesis\nclient is an example of an application that uses OAuth.\nSee Using OAuth for details of how to implement this method.\n\n\nOnce an access token has been obtained, requests can be authorized by putting\nthe token in the Authorization header.",
"prefix": "\n\n\nAccess tokens\u00b6\nAPI requests w",
"type": "TextQuoteSelector",
"suffix": "\nExample request:\nGET /api HTTP/"
}
]
}
],
"links": {
"json": "https://hypothes.is/api/annotations/kEaohJC9Eeiy_UOozkpkyA",
"html": "https://hypothes.is/a/kEaohJC9Eeiy_UOozkpkyA",
"incontext": "https://hyp.is/kEaohJC9Eeiy_UOozkpkyA/h.readthedocs.io/en/latest/api/authorization/"
},
"tags": [],
"text": "sdfsdf",
"created": "2018-07-26T10:20:47.803636+00:00",
"uri": "https://h.readthedocs.io/en/latest/api/authorization/",
"flagged": false,
"user_info": {
"display_name": null
},
"user": "acct:Aravindan@hypothes.is",
"hidden": false,
"document": {
"title": [
"Authorization \u2014 h 0.0.2 documentation"
]
},
"id": "kEaohJC9Eeiy_UOozkpkyA",
"permissions": {
"read": [
"group:__world__"
],
"admin": [
"acct:Aravindan@hypothes.is"
],
"update": [
"acct:Aravindan@hypothes.is"
],
"delete": [
"acct:Aravindan@hypothes.is"
]
}
}
```
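A sketch of the corresponding request, fetching the annotation by the ID shown above (same HYPOTHESIS_TOKEN assumption as before):
```
import os
import requests

headers = {'Authorization': 'Bearer %s' % os.environ['HYPOTHESIS_TOKEN']}

# Annotation ID taken from the example output above.
annotation_id = 'kEaohJC9Eeiy_UOozkpkyA'
url = 'https://hypothes.is/api/annotations/%s' % annotation_id
response = requests.get(url, headers=headers)
print(response.json()['text'])
```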
## Searching
Here is the output from a call to the search endpoint
(we pass a specific URL filter to the search):
```
{
"rows": [
{
"updated": "2018-08-10T02:21:46.898833+00:00",
"group": "__world__",
"target": [
{
"source": "http://pilot.data-commons.us/organize/CopperInternalDeliveryWorkFlow/",
"selector": [
{
"endContainer": "/div[1]/main[1]/div[1]/div[3]/article[1]/h2[1]",
"startContainer": "/div[1]/main[1]/div[1]/div[3]/article[1]/h2[1]",
"type": "RangeSelector",
"startOffset": 0,
"endOffset": 80
},
{
"type": "TextPositionSelector",
"end": 12328,
"start": 12248
},
{
"exact": "Deliverables are due internally on the first of each month, which here is Day 1,",
"prefix": " \n ",
"type": "TextQuoteSelector",
"suffix": "\u00b6\nDay -30 through -10\nCopper PM "
}
]
}
],
"links": {
"json": "https://hypothes.is/api/annotations/IY2W_pxEEeiVuxfD3sehjQ",
"html": "https://hypothes.is/a/IY2W_pxEEeiVuxfD3sehjQ",
"incontext": "https://hyp.is/IY2W_pxEEeiVuxfD3sehjQ/pilot.data-commons.us/organize/CopperInternalDeliveryWorkFlow/"
},
"tags": [],
"text": "This is a sample annotation",
"created": "2018-08-10T02:21:46.898833+00:00",
"uri": "http://pilot.data-commons.us/organize/CopperInternalDeliveryWorkFlow/",
"flagged": false,
"user_info": {
"display_name": null
},
"user": "acct:charlesreid1dib@hypothes.is",
"hidden": false,
"document": {
"title": [
"Copper Internal Delivery Workflow - Data Commons Internal Site"
]
},
"id": "IY2W_pxEEeiVuxfD3sehjQ",
"permissions": {
"read": [
"group:__world__"
],
"admin": [
"acct:charlesreid1dib@hypothes.is"
],
"update": [
"acct:charlesreid1dib@hypothes.is"
],
"delete": [
"acct:charlesreid1dib@hypothes.is"
]
}
}
],
"total": 1
}
```
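And a sketch of the search request itself; the wildcard URL filter here mirrors the one used in the hypothesis_util.py added later in this changeset:
```
import os
import requests

headers = {'Authorization': 'Bearer %s' % os.environ['HYPOTHESIS_TOKEN']}

# Wildcard URL filter: return annotations on any matching page.
params = {'url': '*pilot.nihdatacommons.us*', 'limit': 200}
response = requests.get('https://hypothes.is/api/search',
                        headers=headers, params=params)
for row in response.json()['rows']:
    print(row['uri'], row['text'])
```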

README.md

@@ -12,8 +12,13 @@ one centillion is 3.03 log-times better than a googol.
## What Is It
Centillion (https://github.com/dcppc/centillion) is a search engine that can index
three kinds of collections: Google Documents (.docx files), Github issues, and Markdown files in
Github repos.
different kinds of document collections: Google Documents (.docx files), Google Drive files,
Github issues, Github files, Github Markdown files, and Groups.io email threads.
## What Is It
We define the types of documents the centillion should index,
what info and how. The centillion then builds and

Todo.md: 56 changed lines

@@ -1,47 +1,29 @@
# todo
Main task:
- hashing and caching
- <s>first, working out the logic of how we group items into sets
- needs to be deleted
- needs to be updated
- needs to be added
- for docs, issues, and comments</s>
- second, when we add or update an item, need to:
- go through the motions, download file, extract text
- check for existing indexed doc with that id
- check if existing indexed doc has same hash
- if so, skip
- otherwise, delete and re-index
ux improvements:
- feedback tools
- integrating master list into single list
- providing advanced search interface
Other bugs:
- Some github issues have no title (?)
- <s>Need to combine issues with comments</s>
- Not able to index markdown files _in a repo_
- (Longer term) update main index vs update diff index
Needs:
- <s>control panel</s>
Thursday product:
- Everything re-indexed nightly
- Search engine built on all documents in Google Drive, all issues, markdown files
- Using pandoc to extract Google Drive document contents
- BRIEF quickstart documentation
Future:
- Future plans to improve - plugins, improving matching
- Subdomain plans
- Folksonomy tagging and integration plans
big picture improvements:
- hypothesis API
- folksonomy tagging with hypothesis
- tags, expanded schema
config options for plugins
conditional blocks with import github inside
complicated tho - better to have components split off
feedback form: where we are at
- feedback button
- button triggers modal form
- modal has emojis for feedback, text box, buttons
- clicking emojis changes color, to select
- clicking submit with filled out form submits to an endpoint
- clicking submit also closes form, but only if submit successful
feedback form: what we need to do
- fix alerts - thank you for your feedback doesn't show up until a refresh
- probably an easy ajax fix


@@ -3,6 +3,7 @@ import subprocess
import codecs
import os, json
from datetime import datetime
from werkzeug.contrib.fixers import ProxyFix
from flask import Flask, request, redirect, url_for, render_template, flash, jsonify
@@ -39,6 +40,7 @@ class UpdateIndexTask(object):
'groupsio_username' : app_config['GROUPSIO_USERNAME'],
'groupsio_password' : app_config['GROUPSIO_PASSWORD']
}
self.disqus_token = app_config['DISQUS_TOKEN']
thread.daemon = True
thread.start()
@@ -53,6 +55,7 @@ class UpdateIndexTask(object):
search.update_index(self.groupsio_credentials,
self.gh_token,
self.disqus_token,
self.run_which,
config)
@@ -262,13 +265,57 @@ def list_docs(doctype):
all_orgs = resp.json()
for org in all_orgs:
if org['login']=='dcppc':
copper_team_id = '2700235'
mresp = github.get('/teams/%s/members/%s'%(copper_team_id,username))
if mresp.status_code==204:
# Business as usual
search = Search(app.config["INDEX_DIR"])
return jsonify(search.get_list(doctype))
# Business as usual
search = Search(app.config["INDEX_DIR"])
return jsonify(search.get_list(doctype))
# nope
return render_template('403.html')
@app.route('/feedback', methods=['POST'])
def parse_request():
if not github.authorized:
return redirect(url_for("github.login"))
username = github.get("/user").json()['login']
resp = github.get("/user/orgs")
if resp.ok:
all_orgs = resp.json()
for org in all_orgs:
if org['login']=='dcppc':
try:
# Business as usual
data = request.form.to_dict();
data['github_login'] = username
data['timestamp'] = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
feedback_database = 'feedback_database.json'
if not os.path.isfile(feedback_database):
with open(feedback_database,'w') as f:
json_data = [data]
json.dump(json_data, f, indent=4)
else:
json_data = []
with open(feedback_database,'r') as f:
json_data = json.load(f)
json_data.append(data)
with open(feedback_database,'w') as f:
json.dump(json_data, f, indent=4)
## Should be done with Javascript
#flash("Thank you for your feedback!")
return jsonify({'status':'ok','message':'Thank you for your feedback!'})
except:
return jsonify({'status':'error','message':'An error was encountered while submitting your feedback. Try submitting an issue in the <a href="https://github.com/dcppc/centillion/issues/new">dcppc/centillion</a> repository.'})
# nope
return render_template('403.html')
@app.errorhandler(404)
@@ -297,5 +344,10 @@ def store_search(query, fields):
if __name__ == '__main__':
# if running local instance, set to true
os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = 'true'
app.run(host="0.0.0.0",port=5000)
port = os.environ.get('CENTILLION_PORT','')
if port=='':
port = 5000
else:
port = int(port)
app.run(host="0.0.0.0", port=port)


@@ -6,6 +6,8 @@ import base64
from gdrive_util import GDrive
from groupsio_util import GroupsIOArchivesCrawler, GroupsIOException
from disqus_util import DisqusCrawler
from apiclient.http import MediaIoBaseDownload
import mistune
@@ -19,8 +21,11 @@ import codecs
from datetime import datetime
import dateutil.parser
from whoosh import query
from whoosh.qparser import MultifieldParser, QueryParser
from whoosh.analysis import StemmingAnalyzer
from whoosh.analysis import StemmingAnalyzer, LowercaseFilter, StopFilter
from whoosh.qparser.dateparse import DateParserPlugin
from whoosh import fields, index
"""
@@ -103,10 +108,21 @@ class Search:
# ------------------------------
# Update the entire index
def update_index(self, groupsio_credentials, gh_token, run_which, config):
def update_index(self, groupsio_credentials, gh_token, disqus_token, run_which, config):
"""
Update the entire search index
"""
if run_which=='all' or run_which=='disqus':
try:
self.update_index_disqus(disqus_token, config)
except Exception as e:
print("ERROR: While re-indexing: failed to update Disqus comment threads")
print("-"*40)
print(repr(e))
print("-"*40)
print("Continuing...")
pass
if run_which=='all' or run_which=='emailthreads':
try:
self.update_index_emailthreads(groupsio_credentials, config)
@@ -172,7 +188,8 @@ class Search:
os.mkdir(index_folder)
exists = index.exists_in(index_folder)
stemming_analyzer = StemmingAnalyzer()
#stemming_analyzer = StemmingAnalyzer()
stemming_analyzer = StemmingAnalyzer() | LowercaseFilter() | StopFilter()
# ------------------------------
@@ -180,30 +197,38 @@ class Search:
# is defined.
schema = Schema(
id = ID(stored=True, unique=True),
kind = ID(stored=True),
id = fields.ID(stored=True, unique=True),
kind = fields.ID(stored=True),
created_time = ID(stored=True),
modified_time = ID(stored=True),
indexed_time = ID(stored=True),
created_time = fields.DATETIME(stored=True),
modified_time = fields.DATETIME(stored=True),
indexed_time = fields.DATETIME(stored=True),
title = TEXT(stored=True, field_boost=100.0),
url = ID(stored=True, unique=True),
mimetype=ID(stored=True),
owner_email=ID(stored=True),
owner_name=TEXT(stored=True),
repo_name=TEXT(stored=True),
repo_url=ID(stored=True),
title = fields.TEXT(stored=True, field_boost=100.0),
github_user=TEXT(stored=True),
url = fields.ID(stored=True),
mimetype = fields.TEXT(stored=True),
owner_email = fields.ID(stored=True),
owner_name = fields.TEXT(stored=True),
# mainly for email threads, groups.io, hypothesis
group = fields.ID(stored=True),
repo_name = fields.TEXT(stored=True),
repo_url = fields.ID(stored=True),
github_user = fields.TEXT(stored=True),
tags = fields.KEYWORD(commas=True,
stored=True,
lowercase=True),
# comments only
issue_title=TEXT(stored=True, field_boost=100.0),
issue_url=ID(stored=True),
issue_title = fields.TEXT(stored=True, field_boost=100.0),
issue_url = fields.ID(stored=True),
content=TEXT(stored=True, analyzer=stemming_analyzer)
content = fields.TEXT(stored=True, analyzer=stemming_analyzer)
)
@@ -243,24 +268,32 @@ class Search:
writer.delete_by_term('id',item['id'])
# Index a plain google drive file
writer.add_document(
id = item['id'],
kind = 'gdoc',
created_time = item['createdTime'],
modified_time = item['modifiedTime'],
indexed_time = datetime.now().replace(microsecond=0).isoformat(),
title = item['name'],
url = item['webViewLink'],
mimetype = mimetype,
owner_email = item['owners'][0]['emailAddress'],
owner_name = item['owners'][0]['displayName'],
repo_name='',
repo_url='',
github_user='',
issue_title='',
issue_url='',
content = content
)
created_time = dateutil.parser.parse(item['createdTime'])
modified_time = dateutil.parser.parse(item['modifiedTime'])
indexed_time = datetime.now().replace(microsecond=0)
try:
writer.add_document(
id = item['id'],
kind = 'gdoc',
created_time = created_time,
modified_time = modified_time,
indexed_time = indexed_time,
title = item['name'],
url = item['webViewLink'],
mimetype = mimetype,
owner_email = item['owners'][0]['emailAddress'],
owner_name = item['owners'][0]['displayName'],
group='',
repo_name='',
repo_url='',
github_user='',
issue_title='',
issue_url='',
content = content
)
except ValueError as e:
print(repr(e))
print(" > XXXXXX Failed to index Google Drive file \"%s\""%(item['name']))
else:
@@ -314,7 +347,7 @@ class Search:
)
assert output == ""
except RuntimeError:
print(" > XXXXXX Failed to index document \"%s\""%(item['name']))
print(" > XXXXXX Failed to index Google Drive document \"%s\""%(item['name']))
# If export was successful, read contents of markdown
@@ -342,24 +375,33 @@ class Search:
else:
print(" > Creating a new record")
writer.add_document(
id = item['id'],
kind = 'gdoc',
created_time = item['createdTime'],
modified_time = item['modifiedTime'],
indexed_time = datetime.now().replace(microsecond=0).isoformat(),
title = item['name'],
url = item['webViewLink'],
mimetype = mimetype,
owner_email = item['owners'][0]['emailAddress'],
owner_name = item['owners'][0]['displayName'],
repo_name='',
repo_url='',
github_user='',
issue_title='',
issue_url='',
content = content
)
try:
created_time = dateutil.parser.parse(item['createdTime'])
modified_time = dateutil.parser.parse(item['modifiedTime'])
indexed_time = datetime.now()
writer.add_document(
id = item['id'],
kind = 'gdoc',
created_time = created_time,
modified_time = modified_time,
indexed_time = indexed_time,
title = item['name'],
url = item['webViewLink'],
mimetype = mimetype,
owner_email = item['owners'][0]['emailAddress'],
owner_name = item['owners'][0]['displayName'],
group='',
repo_name='',
repo_url='',
github_user='',
issue_title='',
issue_url='',
content = content
)
except ValueError as e:
print(repr(e))
print(" > XXXXXX Failed to index Google Drive file \"%s\""%(item['name']))
@@ -393,31 +435,36 @@ class Search:
issue_comment_content += comment.body.rstrip()
issue_comment_content += "\n"
# Now create the actual search index record
created_time = clean_timestamp(issue.created_at)
modified_time = clean_timestamp(issue.updated_at)
indexed_time = clean_timestamp(datetime.now())
# Now create the actual search index record.
# Add one document per issue thread,
# containing entire text of thread.
writer.add_document(
id = issue.html_url,
kind = 'issue',
created_time = created_time,
modified_time = modified_time,
indexed_time = indexed_time,
title = issue.title,
url = issue.html_url,
mimetype='',
owner_email='',
owner_name='',
repo_name = repo_name,
repo_url = repo_url,
github_user = issue.user.login,
issue_title = issue.title,
issue_url = issue.html_url,
content = issue_comment_content
)
created_time = issue.created_at
modified_time = issue.updated_at
indexed_time = datetime.now()
try:
writer.add_document(
id = issue.html_url,
kind = 'issue',
created_time = created_time,
modified_time = modified_time,
indexed_time = indexed_time,
title = issue.title,
url = issue.html_url,
mimetype='',
owner_email='',
owner_name='',
group='',
repo_name = repo_name,
repo_url = repo_url,
github_user = issue.user.login,
issue_title = issue.title,
issue_url = issue.html_url,
content = issue_comment_content
)
except ValueError as e:
print(repr(e))
print(" > XXXXXX Failed to index Github issue \"%s\""%(issue.title))
@@ -447,7 +494,8 @@ class Search:
print(" > XXXXXXXX Failed to find file info.")
return
indexed_time = clean_timestamp(datetime.now())
indexed_time = datetime.now()
if fext in MARKDOWN_EXTS:
print("Indexing markdown doc %s from repo %s"%(fname,repo_name))
@@ -476,24 +524,31 @@ class Search:
usable_url = "https://github.com/%s/blob/master/%s"%(repo_name, fpath)
# Now create the actual search index record
writer.add_document(
id = fsha,
kind = 'markdown',
created_time = '',
modified_time = '',
indexed_time = indexed_time,
title = fname,
url = usable_url,
mimetype='',
owner_email='',
owner_name='',
repo_name = repo_name,
repo_url = repo_url,
github_user = '',
issue_title = '',
issue_url = '',
content = content
)
try:
writer.add_document(
id = fsha,
kind = 'markdown',
created_time = None,
modified_time = None,
indexed_time = indexed_time,
title = fname,
url = usable_url,
mimetype='',
owner_email='',
owner_name='',
group='',
repo_name = repo_name,
repo_url = repo_url,
github_user = '',
issue_title = '',
issue_url = '',
content = content
)
except ValueError as e:
print(repr(e))
print(" > XXXXXX Failed to index Github markdown file \"%s\""%(fname))
else:
print("Indexing github file %s from repo %s"%(fname,repo_name))
@@ -501,24 +556,29 @@ class Search:
key = fname+"_"+fsha
# Now create the actual search index record
writer.add_document(
id = key,
kind = 'ghfile',
created_time = '',
modified_time = '',
indexed_time = indexed_time,
title = fname,
url = repo_url,
mimetype='',
owner_email='',
owner_name='',
repo_name = repo_name,
repo_url = repo_url,
github_user = '',
issue_title = '',
issue_url = '',
content = ''
)
try:
writer.add_document(
id = key,
kind = 'ghfile',
created_time = None,
modified_time = None,
indexed_time = indexed_time,
title = fname,
url = repo_url,
mimetype='',
owner_email='',
owner_name='',
group='',
repo_name = repo_name,
repo_url = repo_url,
github_user = '',
issue_title = '',
issue_url = '',
content = ''
)
except ValueError as e:
print(repr(e))
print(" > XXXXXX Failed to index Github file \"%s\""%(fname))
@@ -529,30 +589,84 @@ class Search:
def add_emailthread(self, writer, d, config, update=True):
"""
Use a Github file API record to add a filename
to the search index.
Use a Groups.io email thread record to add
an email thread to the search index.
"""
indexed_time = clean_timestamp(datetime.now())
if 'created_time' in d.keys() and d['created_time'] is not None:
created_time = d['created_time']
else:
created_time = None
if 'modified_time' in d.keys() and d['modified_time'] is not None:
modified_time = d['modified_time']
else:
modified_time = None
indexed_time = datetime.now()
# Now create the actual search index record
writer.add_document(
id = d['permalink'],
kind = 'emailthread',
created_time = '',
modified_time = '',
indexed_time = indexed_time,
title = d['subject'],
url = d['permalink'],
mimetype='',
owner_email='',
owner_name=d['original_sender'],
repo_name = '',
repo_url = '',
github_user = '',
issue_title = '',
issue_url = '',
content = d['content']
)
try:
writer.add_document(
id = d['permalink'],
kind = 'emailthread',
created_time = created_time,
modified_time = modified_time,
indexed_time = indexed_time,
title = d['subject'],
url = d['permalink'],
mimetype='',
owner_email='',
owner_name=d['original_sender'],
group=d['subgroup'],
repo_name = '',
repo_url = '',
github_user = '',
issue_title = '',
issue_url = '',
content = d['content']
)
except ValueError as e:
print(repr(e))
print(" > XXXXXX Failed to index Groups.io thread \"%s\""%(d['subject']))
# ------------------------------
# Add a single disqus comment thread
# to the search index.
def add_disqusthread(self, writer, d, config, update=True):
"""
Use a disqus comment thread record
to add a disqus comment thread to the
search index.
"""
indexed_time = datetime.now()
# created_time is already a timestamp
# Now create the actual search index record
try:
writer.add_document(
id = d['id'],
kind = 'disqus',
created_time = d['created_time'],
modified_time = None,
indexed_time = indexed_time,
title = d['title'],
url = d['link'],
mimetype='',
owner_email='',
owner_name='',
repo_name = '',
repo_url = '',
github_user = '',
issue_title = '',
issue_url = '',
content = d['content']
)
except ValueError as e:
print(repr(e))
print(" > XXXXXX Failed to index Disqus comment thread \"%s\""%(d['title']))
@@ -580,9 +694,8 @@ class Search:
# Updated algorithm:
# - get set of indexed ids
# - get set of remote ids
# - drop indexed ids not in remote ids
# - drop all indexed ids
# - index all remote ids
# - add hash check in add_
# Get the set of indexed ids:
@@ -631,10 +744,10 @@ class Search:
full_items[f['id']] = f
## Shorter:
#break
# Longer:
if nextPageToken is None:
break
break
## Longer:
#if nextPageToken is None:
# break
writer = self.ix.writer()
@@ -642,34 +755,41 @@ class Search:
temp_dir = tempfile.mkdtemp(dir=os.getcwd())
print("Temporary directory: %s"%(temp_dir))
try:
# Drop any id in indexed_ids
# not in remote_ids
drop_ids = indexed_ids - remote_ids
for drop_id in drop_ids:
writer.delete_by_term('id',drop_id)
# Drop any id in indexed_ids
# not in remote_ids
drop_ids = indexed_ids - remote_ids
for drop_id in drop_ids:
writer.delete_by_term('id',drop_id)
# Update any id in indexed_ids
# and in remote_ids
update_ids = indexed_ids & remote_ids
for update_id in update_ids:
# cop out
writer.delete_by_term('id',update_id)
item = full_items[update_id]
self.add_drive_file(writer, item, temp_dir, config, update=True)
count += 1
# Update any id in indexed_ids
# and in remote_ids
update_ids = indexed_ids & remote_ids
for update_id in update_ids:
# cop out
writer.delete_by_term('id',update_id)
item = full_items[update_id]
self.add_drive_file(writer, item, temp_dir, config, update=True)
count += 1
# Add any id not in indexed_ids
# and in remote_ids
add_ids = remote_ids - indexed_ids
for add_id in add_ids:
item = full_items[add_id]
self.add_drive_file(writer, item, temp_dir, config, update=False)
count += 1
# Add any id not in indexed_ids
# and in remote_ids
add_ids = remote_ids - indexed_ids
for add_id in add_ids:
item = full_items[add_id]
self.add_drive_file(writer, item, temp_dir, config, update=False)
count += 1
except Exception as e:
print("ERROR: While adding Google Drive files to search index")
print("-"*40)
print(repr(e))
print("-"*40)
print("Continuing...")
pass
print("Cleaning temporary directory: %s"%(temp_dir))
subprocess.call(['rm','-fr',temp_dir])
@@ -686,12 +806,6 @@ class Search:
Update the search index using a collection of
Github repo issues and comments.
"""
# Updated algorithm:
# - get set of indexed ids
# - get set of remote ids
# - drop indexed ids not in remote ids
# - index all remote ids
# Get the set of indexed ids:
# ------
indexed_issues = set()
@@ -772,12 +886,6 @@ class Search:
files (and, separately, Markdown files) from
a Github repo.
"""
# Updated algorithm:
# - get set of indexed ids
# - get set of remote ids
# - drop indexed ids not in remote ids
# - index all remote ids
# Get the set of indexed ids:
# ------
indexed_ids = set()
@@ -896,12 +1004,6 @@ class Search:
RELEASE THE SPIDER!!!
"""
# Algorithm:
# - get set of indexed ids
# - get set of remote ids
# - drop indexed ids not in remote ids
# - index all remote ids
# Get the set of indexed ids:
# ------
indexed_ids = set()
@@ -919,16 +1021,17 @@ class Search:
# ask spider to crawl the archives
spider.crawl_group_archives()
# now spider.archives is a list of dictionaries
# that each represent a thread:
# thread = {
# 'permalink' : permalink,
# 'subject' : subject,
# 'original_sender' : original_sender,
# 'content' : full_content
# }
# now spider.archives is a dictionary
# with one key per thread ID,
# and a value set to the payload:
# '<thread-id>' : {
# 'permalink' : permalink,
# 'subject' : subject,
# 'original_sender' : original_sender,
# 'content' : full_content
# }
#
# It is hard to reliablly extract more information
# It is hard to reliably extract more information
# than that from the email thread.
writer = self.ix.writer()
@@ -958,6 +1061,75 @@ class Search:
print("Done, updated %d Groups.io email threads in the index" % count)
# ------------------------------
# Disqus Comments
def update_index_disqus(self, disqus_token, config):
"""
Update the search index using a collection of
Disqus comment threads from the dcppc-internal
forum.
"""
# Updated algorithm:
# - get set of indexed ids
# - get set of remote ids
# - drop all indexed ids
# - index all remote ids
# Get the set of indexed ids:
# --------------------
indexed_ids = set()
p = QueryParser("kind", schema=self.ix.schema)
q = p.parse("disqus")
with self.ix.searcher() as s:
results = s.search(q,limit=None)
for result in results:
indexed_ids.add(result['id'])
# Get the set of remote ids:
# ------
spider = DisqusCrawler(disqus_token,'dcppc-internal')
# ask spider to crawl disqus comments
spider.crawl_threads()
# spider.comments will be a dictionary
# with keys as thread IDs and values as
# a dictionary item
writer = self.ix.writer()
count = 0
# archives is a dictionary
# keys are IDs (urls)
# values are dictionaries
threads = spider.get_threads()
# Start by collecting all the things
remote_ids = set()
for k in threads.keys():
remote_ids.add(k)
# drop indexed_ids
for drop_id in indexed_ids:
writer.delete_by_term('id',drop_id)
# add remote_ids
for add_id in remote_ids:
item = threads[add_id]
self.add_disqusthread(writer, item, config, update=False)
count += 1
writer.commit()
print("Done, updated %d Disqus comment threads in the index" % count)
# ---------------------------------
# Search results bundler
@@ -1044,6 +1216,7 @@ class Search:
"ghfile" : None,
"markdown" : None,
"emailthread" : None,
"disqus" : None,
"total" : None
}
for key in counts.keys():
@@ -1074,7 +1247,9 @@ class Search:
elif doctype=='issue':
item_keys = ['title','repo_name','repo_url','url','created_time','modified_time']
elif doctype=='emailthread':
item_keys = ['title','owner_name','url']
item_keys = ['title','owner_name','url','created_time','modified_time']
elif doctype=='disqus':
item_keys = ['title','created_time','url']
elif doctype=='ghfile':
item_keys = ['title','repo_name','repo_url','url']
elif doctype=='markdown':
@@ -1091,11 +1266,7 @@ class Search:
for r in results:
d = {}
for k in item_keys:
if k=='created_time' or k=='modified_time':
#d[k] = r[k]
d[k] = dateutil.parser.parse(r[k]).strftime("%Y-%m-%d")
else:
d[k] = r[k]
d[k] = r[k]
json_results.append(d)
return json_results
@@ -1108,7 +1279,16 @@ class Search:
query_string = " ".join(query_list)
query = None
if ":" in query_string:
query = QueryParser("content", self.schema).parse(query_string)
#query = QueryParser("content",
# self.schema
#).parse(query_string)
query = QueryParser("content",
self.schema,
termclass=query.Variations
)
query.add_plugin(DateParserPlugin(free=True))
query = query.parse(query_string)
elif len(fields) == 1 and fields[0] == "filename":
pass
elif len(fields) == 2:
@@ -1116,9 +1296,12 @@ class Search:
else:
# If the user does not specify a field,
# these are the fields that are actually searched
fields = ['title', 'content','owner_name','owner_email','url']
fields = ['title', 'content','owner_name','owner_email','url','created_date','modified_date']
if not query:
query = MultifieldParser(fields, schema=self.ix.schema).parse(query_string)
query = MultifieldParser(fields, schema=self.ix.schema)
query.add_plugin(DateParserPlugin(free=True))
query = query.parse(query_string)
#query = MultifieldParser(fields, schema=self.ix.schema).parse(query_string)
parsed_query = "%s" % query
print("query: %s" % parsed_query)
results = searcher.search(query, terms=False, scored=True, groupedby="kind")
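With the schema's DATETIME fields and the free-form DateParserPlugin wired in above, query strings can now contain date terms. A rough sketch of what this enables, assuming an existing index directory (the folder name here is an assumption) built with the schema from this diff:
```
from whoosh import index
from whoosh.qparser import QueryParser
from whoosh.qparser.dateparse import DateParserPlugin

# Assumption: an index built with the schema above lives in "search_index".
ix = index.open_dir("search_index")

parser = QueryParser("content", schema=ix.schema)
parser.add_plugin(DateParserPlugin(free=True))

# DATETIME fields now accept plain-language date terms, e.g.:
q = parser.parse(u"created_time:yesterday")
with ix.searcher() as s:
    for hit in s.search(q):
        print(hit['title'])
```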

config_centillion.py (new file): 28 changed lines

@@ -0,0 +1,28 @@
config = {
"repositories" : [
"dcppc/project-management",
"dcppc/nih-demo-meetings",
"dcppc/internal",
"dcppc/organize",
"dcppc/dcppc-bot",
"dcppc/full-stacks",
"dcppc/design-guidelines-discuss",
"dcppc/dcppc-deliverables",
"dcppc/dcppc-milestones",
"dcppc/crosscut-metadata",
"dcppc/lucky-penny",
"dcppc/dcppc-workshops",
"dcppc/metadata-matrix",
"dcppc/data-stewards",
"dcppc/dcppc-phase1-demos",
"dcppc/apis",
"dcppc/2018-june-workshop",
"dcppc/2018-july-workshop",
"dcppc/2018-august-workshop",
"dcppc/2018-september-workshop",
"dcppc/design-guidelines",
"dcppc/2018-may-workshop",
"dcppc/centillion"
]
}

disqus_util.py (new file): 153 changed lines

@@ -0,0 +1,153 @@
import os, re
import requests
import json
import dateutil.parser
from pprint import pprint
"""
Convenience class wrapper for Disqus comments.
This requires that the user provide either their
API OAuth application credentials (in which case
a user needs to authenticate with the application
so it can access the comments that they can see)
or user credentials from a previous login.
"""
class DisqusCrawler(object):
def __init__(self,
credentials,
group_name):
self.credentials = credentials
self.group_name = group_name
self.crawled_comments = False
self.threads = None
def get_threads(self):
"""
Return a list of dictionaries containing
entries for each comment thread in the given
disqus forum.
"""
return self.threads
def crawl_threads(self):
"""
This will use the API to get every thread,
and will iterate through every thread to
get every comment thread.
"""
# The money shot
threads = {}
# list all threads
list_threads_url = 'https://disqus.com/api/3.0/threads/list.json'
# list all posts (comments)
list_posts_url = 'https://disqus.com/api/3.0/threads/listPosts.json'
base_params = dict(
api_key=self.credentials,
forum=self.group_name
)
# prepare url params
params = {}
for k in base_params.keys():
params[k] = base_params[k]
# make api call (first loop in fencepost)
results = requests.request('GET', list_threads_url, params=params).json()
cursor = results['cursor']
responses = results['response']
while True:
for response in responses:
if '127.0.0.1' not in response['link'] and 'localhost' not in response['link']:
# Save thread info
thread_id = response['id']
thread_count = response['posts']
print("Working on thread %s (%d posts)"%(thread_id,thread_count))
if thread_count > 0:
# prepare url params
params_comments = {}
for k in base_params.keys():
params_comments[k] = base_params[k]
params_comments['thread'] = thread_id
# make api call
results_comments = requests.request('GET', list_posts_url, params=params_comments).json()
cursor_comments = results_comments['cursor']
responses_comments = results_comments['response']
# Save comments for this thread
thread_comments = []
while True:
for comment in responses_comments:
# Save comment info
print(" + %s"%(comment['message']))
thread_comments.append(comment['message'])
if cursor_comments['hasNext']:
# Prepare for the next URL call
params_comments = {}
for k in base_params.keys():
params_comments[k] = base_params[k]
params_comments['thread'] = thread_id
params_comments['cursor'] = cursor_comments['next']
# Make the next URL call
results_comments = requests.request('GET', list_posts_url, params=params_comments).json()
cursor_comments = results_comments['cursor']
responses_comments = results_comments['response']
else:
break
link = response['link']
clean_link = re.sub('data-commons.us','nihdatacommons.us',link)
# Finished working on thread.
# We need to make this value a dictionary
thread_info = dict(
id = response['id'],
created_time = dateutil.parser.parse(response['createdAt']),
title = response['title'],
forum = response['forum'],
link = clean_link,
content = "\n\n-----".join(thread_comments)
)
threads[thread_id] = thread_info
if 'hasNext' in cursor.keys() and cursor['hasNext']:
# Prepare for next URL call
params = {}
for k in base_params.keys():
params[k] = base_params[k]
params['cursor'] = cursor['next']
# Make the next URL call
results = requests.request('GET', list_threads_url, params=params).json()
cursor = results['cursor']
responses = results['response']
else:
break
self.threads = threads
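A possible invocation of the crawler above, mirroring how update_index_disqus() drives it; reading the API key from a DISQUS_TOKEN environment variable is an assumption here (centillion itself passes app_config['DISQUS_TOKEN']):
```
import os
from disqus_util import DisqusCrawler

# Assumption: the Disqus API key is exported as DISQUS_TOKEN.
crawler = DisqusCrawler(os.environ['DISQUS_TOKEN'], 'dcppc-internal')
crawler.crawl_threads()

# get_threads() returns a dict keyed by thread ID; each value is the
# thread_info dict assembled above (id, created_time, title, forum,
# link, content).
threads = crawler.get_threads()
for thread_id, info in threads.items():
    print(thread_id, info['title'], info['link'])
```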

groupsio_util.py

@@ -1,5 +1,7 @@
import requests, os, re
from bs4 import BeautifulSoup
import dateutil.parser
import datetime
class GroupsIOException(Exception):
pass
@@ -64,7 +66,7 @@ class GroupsIOArchivesCrawler(object):
## Short circuit
## for debugging purposes
#break
break
return subgroups
@@ -251,7 +253,7 @@ class GroupsIOArchivesCrawler(object):
subject = soup.find('title').text
# Extract information for the schema:
# - permalink for thread (done)
# - permalink for thread (done above)
# - subject/title (done)
# - original sender email/name (done)
# - content (done)
@@ -266,11 +268,35 @@ class GroupsIOArchivesCrawler(object):
pass
else:
# found an email!
# this is a maze, thanks groups.io
# this is a maze, not amazing.
# thanks groups.io!
td = tr.find('td')
divrow = td.find('div',{'class':'row'}).find('div',{'class':'pull-left'})
sender_divrow = td.find('div',{'class':'row'})
sender_divrow = sender_divrow.find('div',{'class':'pull-left'})
if (i+1)==1:
original_sender = divrow.text.strip()
original_sender = sender_divrow.text.strip()
date_divrow = td.find('div',{'class':'row'})
date_divrow = date_divrow.find('div',{'class':'pull-right'})
date_divrow = date_divrow.find('font',{'class':'text-muted'})
date_divrow = date_divrow.find('script').text
try:
time_seconds = re.search(' [0-9]{1,} ',date_divrow).group(0)
time_seconds = time_seconds.strip()
# Thanks groups.io for the weird date formatting
micro_seconds = time_seconds[10:]
time_seconds = time_seconds[:10]
if (i+1)==1:
created_time = datetime.datetime.utcfromtimestamp(int(time_seconds))
modified_time = datetime.datetime.utcfromtimestamp(int(time_seconds))
else:
modified_time = datetime.datetime.utcfromtimestamp(int(time_seconds))
except AttributeError:
created_time = None
modified_time = None
for div in td.find_all('div'):
if div.has_attr('id'):
@@ -299,7 +325,10 @@ class GroupsIOArchivesCrawler(object):
thread = {
'permalink' : permalink,
'created_time' : created_time,
'modified_time' : modified_time,
'subject' : subject,
'subgroup' : subgroup_name,
'original_sender' : original_sender,
'content' : full_content
}
@@ -324,11 +353,13 @@ class GroupsIOArchivesCrawler(object):
results = []
for row in rows:
# We don't care about anything except title and ugly link
# This is where we extract
# a list of thread titles
# and corresponding links.
subject = row.find('span',{'class':'subject'})
title = subject.get_text()
link = row.find('a')['href']
#print(title)
results.append((title,link))
return results

hypothesis_util.py (new file): 89 changed lines

@@ -0,0 +1,89 @@
import requests
import json
import os
def get_headers():
if 'HYPOTHESIS_TOKEN' in os.environ:
token = os.environ['HYPOTHESIS_TOKEN']
else:
raise Exception("Need to specify Hypothesis token with HYPOTHESIS_TOKEN env var")
auth_header = 'Bearer %s'%(token)
return {'Authorization': auth_header}
def basic_auth():
url = 'https://hypothes.is/api'
# Get the authorization header
headers = get_headers()
# Make the request
response = requests.get(url, headers=headers)
if response.status_code==200:
# Interpret results as JSON
dat = response.json()
print(json.dumps(dat, indent=4))
else:
print("Response status code was not OK: %d"%(response.status_code))
def list_annotations():
# kEaohJC9Eeiy_UOozkpkyA
url = 'https://hypothes.is/api/annotations/kEaohJC9Eeiy_UOozkpkyA'
# Get the authorization header
headers = get_headers()
# Make the request
response = requests.get(url, headers=headers)
if response.status_code==200:
# Interpret results as JSON
dat = response.json()
print(json.dumps(dat, indent=4))
else:
print("Response status code was not OK: %d"%(response.status_code))
def search_annotations():
url = 'https://hypothes.is/api/search'
# Get the authorization header
headers = get_headers()
# Set query params
params = dict(
url = '*pilot.nihdatacommons.us*',
limit = 200
)
#http://pilot.nihdatacommons.us/organize/CopperInternalDeliveryWorkFlow/',
# Make the request
response = requests.get(url, headers=headers, params=params)
if response.status_code==200:
# Interpret results as JSON
dat = response.json()
print(json.dumps(dat, indent=4))
else:
print("Response status code was not OK: %d"%(response.status_code))
if __name__=="__main__":
search_annotations()

static/feedback.js (new file): 133 changed lines

@@ -0,0 +1,133 @@
// submitting form with modal:
// https://stackoverflow.com/a/29068742
//
// closing a bootstrap modal with submit button:
// https://stackoverflow.com/a/33478107
//
// flask post data as json:
// https://stackoverflow.com/a/16664376
/* this function is called when the user submits
* the feedback form. it submits a post request
* to the flask server, which squirrels away the
* feedback in a file.
*/
function submit_feedback() {
// this function is called when submit button clicked
// algorithm:
// - check if text box has content
// - check if happy/sad filled out
var smile_active = $('#modal-feedback-smile-div').hasClass('smile-active');
var frown_active = $('#modal-feedback-frown-div').hasClass('frown-active');
if( !( smile_active || frown_active ) ) {
alert('Please pick the smile or the frown.')
} else if( $('#modal-feedback-textarea').val()=='' ) {
alert('Please provide us with some feedback.')
} else {
var user_sentiment = '';
if(smile_active) {
user_sentiment = 'smile';
} else {
user_sentiment = 'frown';
}
var escaped_text = $('#modal-feedback-textarea').val();
// prepare form data
var data = {
sentiment : user_sentiment,
content : escaped_text
};
// post the form. the callback function resets the form
$.post("/feedback",
data,
function(response) {
$('#myModal').modal('hide');
$('#myModalForm')[0].reset();
add_alert(response);
frown_unclick();
smile_unclick();
});
}
}
function add_alert(response) {
str = ""
str += '<div id="feedback-messages-container" class="container">';
if (response['status']=='ok') {
// if status is ok, use alert-success
str += ' <div id="feedback-messages-alert" class="alert alert-success alert-dismissible fade in">';
} else {
// otherwise use alert-danger
str += ' <div id="feedback-messages-alert" class="alert alert-danger alert-dismissible fade in">';
}
str += ' <a href="#" class="close" data-dismiss="alert" aria-label="close">&times;</a>';
str += ' <div id="feedback-messages-contianer" class="container-fluid">';
str += ' <div id="feedback-messages-div" class="co-xs-12">';
str += ' <p>'
str += response['message'];
str += ' </p>';
str += ' </div>';
str += ' </div>';
str += '</div>';
$('div#messages').append(str);
}
/* for those particularly wordy users... limit feedback to 1100 chars */
function cool_it() {
if($('#modal-feedback-textarea').val().length > 1100 ){
$('#modal-too-long').show();
} else {
$('#modal-too-long').hide();
}
}
/* smiley functions */
function smile_click() {
$('#modal-feedback-smile-div').addClass('smile-active');
$('#modal-feedback-smile-icon').addClass('smile-active');
}
function frown_click() {
$('#modal-feedback-frown-div').addClass('frown-active');
$('#modal-feedback-frown-icon').addClass('frown-active');
}
function smile_unclick() {
$('#modal-feedback-smile-div').removeClass('smile-active');
$('#modal-feedback-smile-icon').removeClass('smile-active');
}
function frown_unclick() {
$('#modal-feedback-frown-div').removeClass('frown-active');
$('#modal-feedback-frown-icon').removeClass('frown-active');
}
function smile() {
frown_unclick();
smile_click();
}
function frown() {
smile_unclick();
frown_click();
}
/* for those particularly wordy users... limit feedback to 1100 chars */
// how to check n characters in a textarea
// https://stackoverflow.com/a/19934613
/*
$(document).ready(function() {
$('#modal-feedback-textarea').on('change',function(event) {
if($('#modal-feedback-textarea').val().length > 1100 ){
$('#modal-too-long').show();
} else {
$('#modal-too-long').hide();
}
});
}
*/


@@ -141,7 +141,7 @@ sSearch:"",bRegex:!1,bSmart:!0};m.models.oRow={nTr:null,anCells:null,_aData:[],_
sTitle:null,sType:null,sWidth:null,sWidthOrig:null};m.defaults={aaData:null,aaSorting:[[0,"asc"]],aaSortingFixed:[],ajax:null,aLengthMenu:[10,25,50,100],aoColumns:null,aoColumnDefs:null,aoSearchCols:[],asStripeClasses:null,bAutoWidth:!0,bDeferRender:!1,bDestroy:!1,bFilter:!0,bInfo:!0,bJQueryUI:!1,bLengthChange:!0,bPaginate:!0,bProcessing:!1,bRetrieve:!1,bScrollCollapse:!1,bServerSide:!1,bSort:!0,bSortMulti:!0,bSortCellsTop:!1,bSortClasses:!0,bStateSave:!1,fnCreatedRow:null,fnDrawCallback:null,fnFooterCallback:null,
fnFormatNumber:function(a){return a.toString().replace(/\B(?=(\d{3})+(?!\d))/g,this.oLanguage.sThousands)},fnHeaderCallback:null,fnInfoCallback:null,fnInitComplete:null,fnPreDrawCallback:null,fnRowCallback:null,fnServerData:null,fnServerParams:null,fnStateLoadCallback:function(a){try{return JSON.parse((-1===a.iStateDuration?sessionStorage:localStorage).getItem("DataTables_"+a.sInstance+"_"+location.pathname))}catch(b){}},fnStateLoadParams:null,fnStateLoaded:null,fnStateSaveCallback:function(a,b){try{(-1===
a.iStateDuration?sessionStorage:localStorage).setItem("DataTables_"+a.sInstance+"_"+location.pathname,JSON.stringify(b))}catch(c){}},fnStateSaveParams:null,iStateDuration:7200,iDeferLoading:null,iDisplayLength:10,iDisplayStart:0,iTabIndex:0,oClasses:{},oLanguage:{oAria:{sSortAscending:": activate to sort column ascending",sSortDescending:": activate to sort column descending"},oPaginate:{sFirst:"First",sLast:"Last",sNext:"Next",sPrevious:"Previous"},sEmptyTable:"No data available in table",sInfo:"Showing _START_ to _END_ of _TOTAL_ entries",
sInfoEmpty:"Showing 0 to 0 of 0 entries",sInfoFiltered:"(filtered from _MAX_ total entries)",sInfoPostFix:"",sDecimal:"",sThousands:",",sLengthMenu:"Show _MENU_ entries",sLoadingRecords:"Loading...",sProcessing:"Processing...",sSearch:"Search:",sSearchPlaceholder:"",sUrl:"",sZeroRecords:"No matching records found"},oSearch:h.extend({},m.models.oSearch),sAjaxDataProp:"data",sAjaxSource:null,sDom:"lfrtip",searchDelay:null,sPaginationType:"simple_numbers",sScrollX:"",sScrollXInner:"",sScrollY:"",sServerMethod:"GET",
sInfoEmpty:"Showing 0 to 0 of 0 entries",sInfoFiltered:"(filtered from _MAX_ total entries)",sInfoPostFix:"",sDecimal:"",sThousands:",",sLengthMenu:"Show _MENU_ entries",sLoadingRecords:"Loading...",sProcessing:"Processing...",sSearch:"Search Metadata:",sSearchPlaceholder:"",sUrl:"",sZeroRecords:"No matching records found"},oSearch:h.extend({},m.models.oSearch),sAjaxDataProp:"data",sAjaxSource:null,sDom:"lfrtip",searchDelay:null,sPaginationType:"simple_numbers",sScrollX:"",sScrollXInner:"",sScrollY:"",sServerMethod:"GET",
renderer:null,rowId:"DT_RowId"};X(m.defaults);m.defaults.column={aDataSort:null,iDataSort:-1,asSorting:["asc","desc"],bSearchable:!0,bSortable:!0,bVisible:!0,fnCreatedCell:null,mData:null,mRender:null,sCellType:"td",sClass:"",sContentPadding:"",sDefaultContent:null,sName:"",sSortDataType:"std",sTitle:null,sType:null,sWidth:null};X(m.defaults.column);m.models.oSettings={oFeatures:{bAutoWidth:null,bDeferRender:null,bFilter:null,bInfo:null,bLengthChange:null,bPaginate:null,bProcessing:null,bServerSide:null,
bSort:null,bSortMulti:null,bSortClasses:null,bStateSave:null},oScroll:{bCollapse:null,iBarWidth:0,sX:null,sXInner:null,sY:null},oLanguage:{fnInfoCallback:null},oBrowser:{bScrollOversize:!1,bScrollbarLeft:!1,bBounding:!1,barWidth:0},ajax:null,aanFeatures:[],aoData:[],aiDisplay:[],aiDisplayMaster:[],aIds:{},aoColumns:[],aoHeader:[],aoFooter:[],oPreviousSearch:{},aoPreSearchCols:[],aaSorting:null,aaSortingFixed:[],asStripeClasses:null,asDestroyStripes:[],sDestroyWidth:0,aoRowCallback:[],aoHeaderCallback:[],
aoFooterCallback:[],aoDrawCallback:[],aoRowCreatedCallback:[],aoPreDrawCallback:[],aoInitComplete:[],aoStateSaveParams:[],aoStateLoadParams:[],aoStateLoaded:[],sTableId:"",nTable:null,nTHead:null,nTFoot:null,nTBody:null,nTableWrapper:null,bDeferLoading:!1,bInitialised:!1,aoOpenRows:[],sDom:null,searchDelay:null,sPaginationType:"two_button",iStateDuration:0,aoStateSave:[],aoStateLoad:[],oSavedState:null,oLoadedState:null,sAjaxSource:null,sAjaxDataProp:null,bAjaxDataGet:!0,jqXHR:null,json:k,oAjaxData:k,


@@ -22,6 +22,7 @@ var initIssuesTable = false;
var initGhfilesTable = false;
var initMarkdownTable = false;
var initEmailthreadsTable = false;
var initDisqusTable = false;
$(document).ready(function() {
var url_string = document.location.toString();
@@ -32,10 +33,6 @@ $(document).ready(function() {
load_gdoc_table();
var divList = $('div#collapseDrive').addClass('in');
} else if (d==='emailthread') {
load_emailthreads_table();
var divList = $('div#collapseThreads').addClass('in');
} else if (d==='issue') {
load_issue_table();
var divList = $('div#collapseIssues').addClass('in');
@@ -48,6 +45,14 @@ $(document).ready(function() {
load_markdown_table();
var divList = $('div#collapseMarkdown').addClass('in');
} else if (d==='emailthread') {
load_emailthreads_table();
var divList = $('div#collapseThreads').addClass('in');
} else if (d==='disqus') {
load_disqusthreads_table();
var divList = $('div#collapseDisqus').addClass('in');
}
});
@@ -77,9 +82,9 @@ function load_gdoc_table(){
if(!initGdocTable) {
var divList = $('div#collapseDrive').attr('class');
if (divList.indexOf('in') !== -1) {
console.log('Closing Google Drive master list');
//console.log('Closing Google Drive master list');
} else {
console.log('Opening Google Drive master list');
//console.log('Opening Google Drive master list');
$.getJSON("/list/gdoc", function(result){
@@ -123,18 +128,9 @@ function load_gdoc_table(){
lengthMenu: [50,100,250,500]
});
// Get the search filter section and search box
var searchsec = $(filtlabel).find('label');
var searchbox = searchsec.find('input');
// Replace search filter section text,
// then re-add the removed search box
searchsec.text('Search Metadata: ');
searchsec.append(searchbox);
initGdocTable = true
});
console.log('Finished loading Google Drive master list');
//console.log('Finished loading Google Drive master list');
}
}
}
@@ -146,9 +142,9 @@ function load_issue_table(){
if(!initIssuesTable) {
var divList = $('div#collapseIssues').attr('class');
if (divList.indexOf('in') !== -1) {
console.log('Closing Github issues master list');
//console.log('Closing Github issues master list');
} else {
console.log('Opening Github issues master list');
//console.log('Opening Github issues master list');
$.getJSON("/list/issue", function(result){
var r = new Array(), j = -1, size=result.length;
@@ -190,18 +186,9 @@ function load_issue_table(){
lengthMenu: [50,100,250,500]
});
// Get the search filter section and search box
var searchsec = $(filtlabel).find('label');
var searchbox = searchsec.find('input');
// Replace search filter section text,
// then re-add the removed search box
searchsec.text('Search Metadata: ');
searchsec.append(searchbox);
initIssuesTable = true;
});
console.log('Finished loading Github issues master list');
//console.log('Finished loading Github issues master list');
}
}
}
@@ -213,13 +200,13 @@ function load_ghfile_table(){
if(!initGhfilesTable) {
var divList = $('div#collapseFiles').attr('class');
if (divList.indexOf('in') !== -1) {
console.log('Closing Github files master list');
//console.log('Closing Github files master list');
} else {
console.log('Opening Github files master list');
//console.log('Opening Github files master list');
$.getJSON("/list/ghfile", function(result){
console.log("-----------");
console.log(result);
//console.log("-----------");
//console.log(result);
var r = new Array(), j = -1, size=result.length;
r[++j] = '<thead>'
r[++j] = '<tr class="header-row">';
@@ -253,18 +240,9 @@ function load_ghfile_table(){
lengthMenu: [50,100,250,500]
});
// Get the search filter section and search box
var searchsec = $(filtlabel).find('label');
var searchbox = searchsec.find('input');
// Replace search filter section text,
// then re-add the removed search box
searchsec.text('Search Metadata: ');
searchsec.append(searchbox);
initGhfilesTable = true;
});
console.log('Finished loading Github file list');
//console.log('Finished loading Github file list');
}
}
}
@@ -276,9 +254,9 @@ function load_markdown_table(){
if(!initMarkdownTable) {
var divList = $('div#collapseMarkdown').attr('class');
if (divList.indexOf('in') !== -1) {
console.log('Closing Github markdown master list');
//console.log('Closing Github markdown master list');
} else {
console.log('Opening Github markdown master list');
//console.log('Opening Github markdown master list');
$.getJSON("/list/markdown", function(result){
var r = new Array(), j = -1, size=result.length;
@@ -314,18 +292,9 @@ function load_markdown_table(){
lengthMenu: [50,100,250,500]
});
// Get the search filter section and search box
var searchsec = $(filtlabel).find('label');
var searchbox = searchsec.find('input');
// Replace search filter section text,
// then re-add the removed search box
searchsec.text('Search Metadata: ');
searchsec.append(searchbox);
initMarkdownTable = true;
});
console.log('Finished loading Markdown list');
//console.log('Finished loading Markdown list');
}
}
}
@@ -338,9 +307,9 @@ function load_emailthreads_table(){
if(!initEmailthreadsTable) {
var divList = $('div#collapseThreads').attr('class');
if (divList.indexOf('in') !== -1) {
console.log('Closing Groups.io email threads master list');
//console.log('Closing Groups.io email threads master list');
} else {
console.log('Opening Groups.io email threads master list');
//console.log('Opening Groups.io email threads master list');
$.getJSON("/list/emailthread", function(result){
var r = new Array(), j = -1, size=result.length;
@@ -374,18 +343,59 @@ function load_emailthreads_table(){
lengthMenu: [50,100,250,500]
});
// Get the search filter section and search box
var searchsec = $(filtlabel).find('label');
var searchbox = searchsec.find('input');
// Replace search filter section text,
// then re-add the removed search box
searchsec.text('Search Metadata: ');
searchsec.append(searchbox);
initEmailthreadsTable = true;
});
console.log('Finished loading Groups.io email threads list');
//console.log('Finished loading Groups.io email threads list');
}
}
}
// ------------------------
// Disqus Comment Threads
function load_disqusthreads_table(){
if(!initDisqusTable) {
var divList = $('div#collapseDisqus').attr('class');
if (divList.indexOf('in') !== -1) {
//console.log('Closing Disqus comment threads master list');
} else {
//console.log('Opening Disqus comment threads master list');
$.getJSON("/list/disqus", function(result){
var r = new Array(), j = -1, size=result.length;
r[++j] = '<thead>'
r[++j] = '<tr class="header-row">';
r[++j] = '<th width="70%">Page Title</th>';
r[++j] = '<th width="30%">Created</th>';
r[++j] = '</tr>';
r[++j] = '</thead>'
r[++j] = '<tbody>'
for (var i=0; i<size; i++){
r[++j] ='<tr><td>';
r[++j] = '<a href="' + result[i]['url'] + '" target="_blank">'
r[++j] = result[i]['title'];
r[++j] = '</a>'
r[++j] = '</td><td>';
r[++j] = result[i]['created_time'];
r[++j] = '</td></tr>';
}
r[++j] = '</tbody>'
// Construct names of id tags
var doctype = 'disqus';
var idlabel = '#' + doctype + '-master-list';
var filtlabel = idlabel + '_filter';
// Initialize the DataTable
$(idlabel).html(r.join(''));
$(idlabel).DataTable({
responsive: true,
lengthMenu: [50,100,250,500]
});
initDisqusTable = true;
});
//console.log('Finished loading Disqus comment threads list');
}
}
}

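All of the load_*_table() functions above share the same lazy-load pattern: a module-level init flag, a check of the Bootstrap collapse state, a $.getJSON call against a /list/<doctype> endpoint, and a DataTable initialization keyed off '#<doctype>-master-list'. A minimal sketch of that shared pattern, assuming jQuery and DataTables are loaded as on this page; makeRow is a hypothetical per-doctype row builder, not a function in this repo:

// Hypothetical distillation of the load_*_table() functions above.
// Assumes jQuery and DataTables are already on the page; makeRow is a
// placeholder for each doctype's <tr> builder and is not in the repo.
var initTables = {};
function load_table(doctype, collapseId, makeRow) {
    if (initTables[doctype]) { return; }              // already initialized
    var divList = $('div#' + collapseId).attr('class');
    if (divList.indexOf('in') !== -1) { return; }     // panel is closing, not opening
    $.getJSON('/list/' + doctype, function(result) {
        // (thead/tbody construction omitted for brevity)
        var idlabel = '#' + doctype + '-master-list';
        $(idlabel).html(result.map(makeRow).join(''));
        $(idlabel).DataTable({
            responsive: true,
            lengthMenu: [50,100,250,500]
        });
        initTables[doctype] = true;
    });
}
// The new Disqus loader is then roughly:
// load_table('disqus', 'collapseDisqus', function(t) {
//     return '<tr><td><a href="' + t.url + '" target="_blank">' +
//            t.title + '</a></td><td>' + t.created_time + '</td></tr>';
// });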
View File

@@ -31,7 +31,7 @@ $(document).ready(function() {
aTargets : [2]
}
],
lengthMenu: [50,100,250,500]
lengthMenu: [10,20,50,100]
});
console.log('Finished loading search results list');

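One side effect of this change worth noting: when pageLength is not set explicitly, DataTables falls back to the first lengthMenu entry, so the search results table now defaults to 10 rows per page rather than 50. In miniature (table id from search.html; the columnDefs from the hunk above are omitted):

// DataTables uses the first lengthMenu entry as the default pageLength,
// so this now shows 10 results per page by default instead of 50.
$('#search-results').DataTable({
    lengthMenu: [10,20,50,100]
});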
View File

@@ -1,3 +1,61 @@
#modal-too-long {
visibility: hidden;
}
/* feedback smileys */
#modal-feedback-smile-icon,
#modal-feedback-frown-icon {
padding-left: 100px;
padding-right: 100px;
padding-top: 20px;
padding-bottom: 20px;
}
div.smile-active {
background-color: #2b2;
}
i.smile-active {
color: #fff;
}
div.frown-active {
background-color: #b22;
}
i.frown-active {
color: #fff;
}
/* feedback text area */
#modal-feedback-textarea {
width: 100%;
}
/* feedback buttons */
button.close {
font-size: 35px;
}
button#submit-feedback-btn {
width: 250px;
}
button#feedback:hover {
opacity: 1.0;
filter: alpha(opacity=100); /* For IE8 and earlier */
}
button#feedback {
opacity: 0.5;
filter: alpha(opacity=50); /* For IE8 and earlier */
width: 180px;
height: 50px;
position: fixed;
z-index: 999;
right: 120px;
bottom: 10px;
}
/* search results table */
td#search-results-score-col,
td#search-results-type-col {
width: 100px;
}

View File

@@ -1,4 +1,5 @@
{% extends "layout.html" %}
{% set active_page = "403" %}
{% block body %}
<div class="container">

View File

@@ -1,4 +1,5 @@
{% extends "layout.html" %}
{% set active_page = "404" %}
{% block body %}
<div class="container">

templates/banner.html Normal file
View File

@@ -0,0 +1,25 @@
<div class="container" id="banner-container">
{#
banner image
#}
<div class="row" id="banner-row">
<div class="col12sm" id="banner-col">
<center>
<a id="banner-a" href="{{ url_for('search')}}?query=&fields=">
<img id="banner-img" src="{{ url_for('static', filename='centillion_white.png') }}">
</a>
</center>
</div>
</div>
{% if config['TAGLINE'] %}
<div class="row" id="tagline-row">
<div class="col12sm" id="tagline-col">
<center>
<h2 id="tagline-tagline"> {{config['TAGLINE']}} </h2>
</center>
</div>
</div>
{% endif %}
</div>

View File

@@ -1,4 +1,5 @@
{% extends "layout.html" %}
{% set active_page = "control_panel" %}
{% block body %}
<hr />
@@ -53,6 +54,8 @@
</p>
<p><a href="{{ url_for('update_index',run_which='emailthreads') }}" class="btn btn-large btn-danger btn-reindex-type">Update Groups.io Email Threads Index</a>
</p>
<p><a href="{{ url_for('update_index',run_which='disqus') }}" class="btn btn-large btn-danger btn-reindex-type">Update Disqus Comment Threads Index</a>
</p>
</div>
</div>
</div>

View File

@@ -0,0 +1,14 @@
<div id="messages">
{% with messages = get_flashed_messages() %}
{% if messages %}
<div class="container" id="flashed-messages-container">
<div class="alert alert-success alert-dismissible fade in">
<a href="#" class="close" data-dismiss="alert" aria-label="close">&times;</a>
{% for message in messages %}
<p class="lead">{{ message }}</p>
{% endfor %}
</div>
</div>
{% endif %}
{% endwith %}
</div>

View File

@@ -1,4 +1,5 @@
{% extends "layout.html" %}
{% set active_page = "landing" %}
{% block body %}
<div class="container">
<div class="row">

View File

@@ -10,6 +10,7 @@
<script src="{{ url_for('static', filename='bootstrap.min.js') }}"></script>
<script src="{{ url_for('static', filename='master_list.js') }}"></script>
<script src="{{ url_for('static', filename='search_list.js') }}"></script>
<script src="{{ url_for('static', filename='feedback.js') }}"></script>
{# ########## dataTables plugin ############ #}
@@ -23,52 +24,43 @@
{# ########## github fork corner ############ #}
<div id="master-div">
<div>
{#
flashed messages
#}
{% include "flashed_messages.html" %}
{% with messages = get_flashed_messages() %}
{% if messages %}
<div class="container">
<div class="alert alert-success alert-dismissible">
<a href="#" class="close" data-dismiss="alert" aria-label="close">&times;</a>
<ul class=flashes>
{% for message in messages %}
<li>{{ message }}</li>
{% endfor %}
</ul>
</div>
</div>
{% endif %}
{% endwith %}
{#
banner image
#}
{% include "banner.html" %}
<div class="container">
{#
banner image
#}
<div class="row">
<div class="col12sm">
<center>
<a href="{{ url_for('search')}}?query=&fields=">
<img src="{{ url_for('static', filename='centillion_white.png') }}">
</a>
{#
need a tag line
#}
{% if config['TAGLINE'] %}
<h2><a href="{{ url_for('search')}}?query=&fields=">
{{config['TAGLINE']}}
</a></h2>
{% endif %}
</center>
</div>
</div>
</div>
{#
feedback modal
#}
{% include "modal.html" %}
{% block body %}{% endblock %}
</div>
<a href="https://github.com/dcppc/centillion" class="github-corner" aria-label="View source on Github"><svg width="80" height="80" viewBox="0 0 250 250" style="fill:#151513; color:#fff; position: absolute; top: 0; border: 0; right: 0;" aria-hidden="true"><path d="M0,0 L115,115 L130,115 L142,142 L250,250 L250,0 Z"></path><path d="M128.3,109.0 C113.8,99.7 119.0,89.6 119.0,89.6 C122.0,82.7 120.5,78.6 120.5,78.6 C119.2,72.0 123.4,76.3 123.4,76.3 C127.3,80.9 125.5,87.3 125.5,87.3 C122.9,97.6 130.6,101.9 134.4,103.2" fill="currentColor" style="transform-origin: 130px 106px;" class="octo-arm"></path><path d="M115.0,115.0 C114.9,115.1 118.7,116.5 119.8,115.4 L133.7,101.6 C136.9,99.2 139.9,98.4 142.2,98.6 C133.8,88.0 127.5,74.4 143.8,58.0 C148.5,53.4 154.0,51.2 159.7,51.0 C160.3,49.4 163.2,43.6 171.4,40.1 C171.4,40.1 176.1,42.5 178.8,56.2 C183.1,58.6 187.2,61.8 190.9,65.4 C194.5,69.0 197.7,73.2 200.1,77.6 C213.8,80.2 216.3,84.9 216.3,84.9 C212.7,93.1 206.9,96.0 205.4,96.6 C205.1,102.4 203.0,107.8 198.3,112.5 C181.9,128.9 168.3,122.5 157.7,114.1 C157.9,116.9 156.7,120.9 152.7,124.9 L141.0,136.5 C139.8,137.7 141.6,141.9 141.8,141.8 Z" fill="currentColor" class="octo-body"></path></svg></a><style>.github-corner:hover .octo-arm{animation:octocat-wave 560ms ease-in-out}@keyframes octocat-wave{0%,100%{transform:rotate(0)}20%,60%{transform:rotate(-25deg)}40%,80%{transform:rotate(10deg)}}@media (max-width:500px){.github-corner:hover .octo-arm{animation:none}.github-corner .octo-arm{animation:octocat-wave 560ms ease-in-out}}</style>
{% if active_page=="search" or active_page=="master_list" %}
{# feedback button #}
<button id="feedback" type="button"
data-toggle="modal"
data-target="#myModal"
class="btn btn-lg">Send Feedback</button>
{# vertical spacing before the bottom, b/c of button #}
<div id="footer-whitespace" class="container">
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
</div>
{% endif %}
<a id="github-corner" href="https://github.com/dcppc/centillion" class="github-corner" aria-label="View source on Github"><svg width="80" height="80" viewBox="0 0 250 250" style="fill:#151513; color:#fff; position: absolute; top: 0; border: 0; right: 0;" aria-hidden="true"><path d="M0,0 L115,115 L130,115 L142,142 L250,250 L250,0 Z"></path><path d="M128.3,109.0 C113.8,99.7 119.0,89.6 119.0,89.6 C122.0,82.7 120.5,78.6 120.5,78.6 C119.2,72.0 123.4,76.3 123.4,76.3 C127.3,80.9 125.5,87.3 125.5,87.3 C122.9,97.6 130.6,101.9 134.4,103.2" fill="currentColor" style="transform-origin: 130px 106px;" class="octo-arm"></path><path d="M115.0,115.0 C114.9,115.1 118.7,116.5 119.8,115.4 L133.7,101.6 C136.9,99.2 139.9,98.4 142.2,98.6 C133.8,88.0 127.5,74.4 143.8,58.0 C148.5,53.4 154.0,51.2 159.7,51.0 C160.3,49.4 163.2,43.6 171.4,40.1 C171.4,40.1 176.1,42.5 178.8,56.2 C183.1,58.6 187.2,61.8 190.9,65.4 C194.5,69.0 197.7,73.2 200.1,77.6 C213.8,80.2 216.3,84.9 216.3,84.9 C212.7,93.1 206.9,96.0 205.4,96.6 C205.1,102.4 203.0,107.8 198.3,112.5 C181.9,128.9 168.3,122.5 157.7,114.1 C157.9,116.9 156.7,120.9 152.7,124.9 L141.0,136.5 C139.8,137.7 141.6,141.9 141.8,141.8 Z" fill="currentColor" class="octo-body"></path></svg></a><style>.github-corner:hover .octo-arm{animation:octocat-wave 560ms ease-in-out}@keyframes octocat-wave{0%,100%{transform:rotate(0)}20%,60%{transform:rotate(-25deg)}40%,80%{transform:rotate(10deg)}}@media (max-width:500px){.github-corner:hover .octo-arm{animation:none}.github-corner .octo-arm{animation:octocat-wave 560ms ease-in-out}}</style>

View File

@@ -1,4 +1,5 @@
{% extends "layout.html" %}
{% set active_page = "master_list" %}
{% block body %}
<hr />
@@ -8,8 +9,9 @@
<div class="row">
{#
# google drive files panel
#}
# google drive files panel
#}
<a name="gdoc"></a>
<div class="row">
<div class="panel">
<div class="panel-group" id="accordionDrive" role="tablist" aria-multiselectable="true">
@@ -45,8 +47,9 @@
{#
# github issue panel
#}
# github issue panel
#}
<a name="issue"></a>
<div class="row">
<div class="panel">
<div class="panel-group" id="accordionIssues" role="tablist" aria-multiselectable="true">
@@ -84,8 +87,9 @@
{#
# github file panel
#}
# github file panel
#}
<a name="ghfile"></a>
<div class="row">
<div class="panel">
<div class="panel-group" id="accordionFiles" role="tablist" aria-multiselectable="true">
@@ -121,8 +125,9 @@
{#
# gh markdown file panel
#}
# gh markdown file panel
#}
<a name="markdown"></a>
<div class="row">
<div class="panel">
<div class="panel-group" id="accordionMarkdown" role="tablist" aria-multiselectable="true">
@@ -159,8 +164,9 @@
{#
# groups.io
#}
# groups.io email threads
#}
<a name="emailthread"></a>
<div class="row">
<div class="panel">
<div class="panel-group" id="accordionThreads" role="tablist" aria-multiselectable="true">
@@ -194,6 +200,42 @@
</div>
</div>
{#
# disqus comment threads
#}
<a name="disqus"></a>
<div class="row">
<div class="panel">
<div class="panel-group" id="accordionDisqus" role="tablist" aria-multiselectable="true">
<div class="panel panel-default">
<div class="panel-heading" role="tab" id="disqus">
<h2 class="masterlist-header">
<a class="collapsed"
role="button"
onClick="load_disqusthreads_table()"
data-toggle="collapse"
data-parent="#accordionDisqus"
href="#collapseDisqus"
aria-expanded="true"
aria-controls="collapseDisqus">
Disqus Comment Threads <small>indexed by centillion</small>
</a>
</h2>
</div>
<div id="collapseDisqus" class="panel-collapse collapse" role="tabpanel"
aria-labelledby="disqus">
<div class="panel-body">
<table class="table table-striped" id="disqus-master-list">
</table>
</div>
</div>
</div>
</div>
</div>
</div>
</div>

templates/modal.html Normal file
View File

@@ -0,0 +1,51 @@
<div class="modal fade" id="myModal" tabindex="-1" role="dialog" aria-labelledby="myModalLabel">
<form id="myModalForm" method="post">
<div class="modal-dialog" role="document">
<div id="myModal-content" class="modal-content">
<div id="myModal-header" class="modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">&times;</span>
</button>
<h4 class="modal-title" id="myModalLabel">
Send us feedback!
</h4>
</div>
<div id="myModal-body" class="modal-body">
<div id="modal-feedback-smile-frown-container" class="container-fluid">
<div id="modal-feedback-smile-div" class="col-xs-6 text-center"
onClick="smile()">
<i id="modal-feedback-smile-icon" class="fa fa-smile-o fa-4x" aria-hidden="true"></i>
</div>
<div id="modal-feedback-frown-div" class="col-xs-6 text-center"
onClick="frown()">
<i id="modal-feedback-frown-icon" class="fa fa-frown-o fa-4x" aria-hidden="true"></i>
</div>
</div>
<div class="container-fluid">
<p>&nbsp;</p>
</div>
<div id="modal-feedback-textarea-container" class="container-fluid">
<textarea id="modal-feedback-textarea" rows="6"></textarea>
</div>
<div id="modal-too-long" class="container-fluid" >
<p id="modal-too-long-text" class="lead">Please limit the length of your feedback. Thank you in advance!</p>
</div>
</div>
<div id="myModal-footer" class="modal-footer">
<div class="text-center">
<button id="submit-feedback-btn" type="button"
onClick="submit_feedback()"
class="btn btn-lg btn-primary">
Send
</button>
</div>
</div>
</div>
</div>
</form>
</div>

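feedback.js is referenced from layout.html but not shown in this diff. Judging from the hooks in modal.html (smile(), frown(), submit_feedback()) and the smile-active/frown-active classes and hidden #modal-too-long rule in the CSS above, it plausibly looks something like the sketch below. This is a guess at the shape, not the repo's actual code; the 2000-character limit is an assumption:

// Speculative sketch of feedback.js, inferred from modal.html and the CSS.
var feedbackSmile = null;                 // true = smile, false = frown
function smile() {
    feedbackSmile = true;
    $('#modal-feedback-smile-div, #modal-feedback-smile-icon').addClass('smile-active');
    $('#modal-feedback-frown-div, #modal-feedback-frown-icon').removeClass('frown-active');
}
function frown() {
    feedbackSmile = false;
    $('#modal-feedback-frown-div, #modal-feedback-frown-icon').addClass('frown-active');
    $('#modal-feedback-smile-div, #modal-feedback-smile-icon').removeClass('smile-active');
}
function submit_feedback() {
    var text = $('#modal-feedback-textarea').val();
    if (text.length > 2000) {             // assumed limit, not from the repo
        $('#modal-too-long').css('visibility', 'visible');
        return;
    }
    $('#myModalForm').submit();           // the modal's form posts back to the page
}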
View File

@@ -1,7 +1,8 @@
{% extends "layout.html" %}
{% set active_page = "search" %}
{% block body %}
<div class="container">
<div id="search-bar-container" class="container">
<div class="row">
<div class="col-xs-12">
@@ -12,7 +13,11 @@
<p><button id="the-big-one" type="submit" style="font-size: 20px; padding: 10px; padding-left: 50px; padding-right: 50px;"
value="search" class="btn btn-primary">Search</button>
</p>
<p><a href="{{ url_for('search')}}?query=&fields=">[clear all results]</a>
{% if parsed_query %}
<p><a href="{{ url_for('search')}}?query=&fields=">[clear all results]</a>
{% endif %}
</p>
</form>
</center>
@@ -20,17 +25,10 @@
</div>
</div>
<div class="container">
<div class="row">
<div style="height: 20px;"><p>&nbsp;</p></div>
{% if directories %}
<div class="col-xs-12 info directories-cloud">
<b>File directories:</b>
{% for d in directories %}
<a href="{{url_for('search')}}?query={{d|trim}}&fields=filename">{{d|trim}}</a>
{% endfor %}
</div>
{% endif %}
<div id="info-bars-container" class="container">
<div class="row">
<ul class="list-group">
@@ -46,6 +44,9 @@
</li>
{% endif %}
{# use "if parsed_query" to check if this is
a new search or search results #}
{% if parsed_query %}
<li class="list-group-item">
<div class="container-fluid">
@@ -59,6 +60,7 @@
</li>
{% endif %}
<li class="list-group-item">
<div class="container-fluid">
<div class="row">
@@ -66,28 +68,33 @@
<b>Indexing:</b>
<span class="badge">{{totals["gdoc"]}}</span>
<a href="/master_list?doctype=gdoc">
<a href="/master_list?doctype=gdoc#gdoc">
Google Drive files
</a>,
<span class="badge">{{totals["issue"]}}</span>
<a href="/master_list?doctype=issue">
<a href="/master_list?doctype=issue#issue">
Github issues
</a>,
<span class="badge">{{totals["ghfile"]}}</span>
<a href="/master_list?doctype=ghfile">
<a href="/master_list?doctype=ghfile#ghfile">
Github files
</a>,
<span class="badge">{{totals["markdown"]}}</span>
<a href="/master_list?doctype=markdown">
<a href="/master_list?doctype=markdown#markdown">
Github Markdown files
</a>,
<span class="badge">{{totals["emailthread"]}}</span>
<a href="/master_list?doctype=emailthread">
<a href="/master_list?doctype=emailthread#emailthread">
Groups.io email threads
</a>,
<span class="badge">{{totals["disqus"]}}</span>
<a href="/master_list?doctype=disqus#disqus">
Disqus comment threads
</a>
</div>
</div>
@@ -99,7 +106,7 @@
</div>
{% if parsed_query %}
<div class="container">
<div id="search-results-container" class="container">
<div class="row">
<table id="search-results" class="table">
<thead id="search-results-header">
@@ -126,44 +133,21 @@
{% if e.kind=="gdoc" %}
{% if e.mimetype=="document" %}
<p><small>Drive Document</small></p>
<!--
<i class="fa fa-google fa-2x"></i>
<i class="fa fa-file-text fa-2x"></i>
-->
{% else %}
<p><small>Drive File</small></p>
<!--
<i class="fa fa-google fa-2x"></i>
<i class="fa fa-file-o fa-2x"></i>
-->
{% endif %}
{% elif e.kind=="issue" %}
<p><small>Issue</small></p>
<!--
<i class="fa fa-github fa-2x"></i>
<i class="fa fa-question fa-2x"></i>
-->
{% elif e.kind=="ghfile" %}
<p><small>Github File</small></p>
<!--
<i class="fa fa-github fa-2x"></i>
<i class="fa fa-file-o fa-2x"></i>
-->
{% elif e.kind=="markdown" %}
<p><small>Github Markdown</small></p>
<!--
<i class="fa fa-github fa-2x"></i>
<i class="fa fa-file-text-o fa-2x"></i>
-->
{% elif e.kind=="emailthread" %}
<p><small>Email Thread</small></p>
<!--
<i class="fa fa-envelope-o fa-2x"></i>
-->
{% else %}
<p><small>Unknown</small></p>